Jan 06 14:32:10 localhost kernel: Linux version 5.14.0-655.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Mon Dec 29 08:24:22 UTC 2025
Jan 06 14:32:10 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 06 14:32:10 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64 root=UUID=f2a0a5c1-133f-4977-b837-e40b31cbd9cc ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 06 14:32:10 localhost kernel: BIOS-provided physical RAM map:
Jan 06 14:32:10 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 06 14:32:10 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 06 14:32:10 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 06 14:32:10 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 06 14:32:10 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 06 14:32:10 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 06 14:32:10 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 06 14:32:10 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 06 14:32:10 localhost kernel: NX (Execute Disable) protection: active
Jan 06 14:32:10 localhost kernel: APIC: Static calls initialized
Jan 06 14:32:10 localhost kernel: SMBIOS 2.8 present.
Jan 06 14:32:10 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 06 14:32:10 localhost kernel: Hypervisor detected: KVM
Jan 06 14:32:10 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 06 14:32:10 localhost kernel: kvm-clock: using sched offset of 3245936120 cycles
Jan 06 14:32:10 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 06 14:32:10 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 06 14:32:10 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 06 14:32:10 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 06 14:32:10 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 06 14:32:10 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 06 14:32:10 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 06 14:32:10 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 06 14:32:10 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 06 14:32:10 localhost kernel: Using GB pages for direct mapping
Jan 06 14:32:10 localhost kernel: RAMDISK: [mem 0x2d461000-0x32a28fff]
Jan 06 14:32:10 localhost kernel: ACPI: Early table checksum verification disabled
Jan 06 14:32:10 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 06 14:32:10 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 06 14:32:10 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 06 14:32:10 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 06 14:32:10 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 06 14:32:10 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 06 14:32:10 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 06 14:32:10 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 06 14:32:10 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 06 14:32:10 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 06 14:32:10 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 06 14:32:10 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 06 14:32:10 localhost kernel: No NUMA configuration found
Jan 06 14:32:10 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 06 14:32:10 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 06 14:32:10 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 06 14:32:10 localhost kernel: Zone ranges:
Jan 06 14:32:10 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 06 14:32:10 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 06 14:32:10 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 06 14:32:10 localhost kernel:   Device   empty
Jan 06 14:32:10 localhost kernel: Movable zone start for each node
Jan 06 14:32:10 localhost kernel: Early memory node ranges
Jan 06 14:32:10 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 06 14:32:10 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 06 14:32:10 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 06 14:32:10 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 06 14:32:10 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 06 14:32:10 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 06 14:32:10 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 06 14:32:10 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 06 14:32:10 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 06 14:32:10 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 06 14:32:10 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 06 14:32:10 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 06 14:32:10 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 06 14:32:10 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 06 14:32:10 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 06 14:32:10 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 06 14:32:10 localhost kernel: TSC deadline timer available
Jan 06 14:32:10 localhost kernel: CPU topo: Max. logical packages:   8
Jan 06 14:32:10 localhost kernel: CPU topo: Max. logical dies:       8
Jan 06 14:32:10 localhost kernel: CPU topo: Max. dies per package:   1
Jan 06 14:32:10 localhost kernel: CPU topo: Max. threads per core:   1
Jan 06 14:32:10 localhost kernel: CPU topo: Num. cores per package:     1
Jan 06 14:32:10 localhost kernel: CPU topo: Num. threads per package:   1
Jan 06 14:32:10 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 06 14:32:10 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 06 14:32:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 06 14:32:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 06 14:32:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 06 14:32:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 06 14:32:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 06 14:32:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 06 14:32:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 06 14:32:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 06 14:32:10 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 06 14:32:10 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 06 14:32:10 localhost kernel: Booting paravirtualized kernel on KVM
Jan 06 14:32:10 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 06 14:32:10 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 06 14:32:10 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 06 14:32:10 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 06 14:32:10 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 06 14:32:10 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 06 14:32:10 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64 root=UUID=f2a0a5c1-133f-4977-b837-e40b31cbd9cc ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 06 14:32:10 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64", will be passed to user space.
Jan 06 14:32:10 localhost kernel: random: crng init done
Jan 06 14:32:10 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 06 14:32:10 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 06 14:32:10 localhost kernel: Fallback order for Node 0: 0 
Jan 06 14:32:10 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 06 14:32:10 localhost kernel: Policy zone: Normal
Jan 06 14:32:10 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 06 14:32:10 localhost kernel: software IO TLB: area num 8.
Jan 06 14:32:10 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 06 14:32:10 localhost kernel: ftrace: allocating 49414 entries in 194 pages
Jan 06 14:32:10 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 06 14:32:10 localhost kernel: Dynamic Preempt: voluntary
Jan 06 14:32:10 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 06 14:32:10 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 06 14:32:10 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 06 14:32:10 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 06 14:32:10 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 06 14:32:10 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 06 14:32:10 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 06 14:32:10 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 06 14:32:10 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 06 14:32:10 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 06 14:32:10 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 06 14:32:10 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 06 14:32:10 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 06 14:32:10 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 06 14:32:10 localhost kernel: Console: colour VGA+ 80x25
Jan 06 14:32:10 localhost kernel: printk: console [ttyS0] enabled
Jan 06 14:32:10 localhost kernel: ACPI: Core revision 20230331
Jan 06 14:32:10 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 06 14:32:10 localhost kernel: x2apic enabled
Jan 06 14:32:10 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 06 14:32:10 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 06 14:32:10 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 06 14:32:10 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 06 14:32:10 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 06 14:32:10 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 06 14:32:10 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 06 14:32:10 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 06 14:32:10 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 06 14:32:10 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 06 14:32:10 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 06 14:32:10 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 06 14:32:10 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 06 14:32:10 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 06 14:32:10 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 06 14:32:10 localhost kernel: x86/bugs: return thunk changed
Jan 06 14:32:10 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 06 14:32:10 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 06 14:32:10 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 06 14:32:10 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 06 14:32:10 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 06 14:32:10 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 06 14:32:10 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 06 14:32:10 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 06 14:32:10 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 06 14:32:10 localhost kernel: landlock: Up and running.
Jan 06 14:32:10 localhost kernel: Yama: becoming mindful.
Jan 06 14:32:10 localhost kernel: SELinux:  Initializing.
Jan 06 14:32:10 localhost kernel: LSM support for eBPF active
Jan 06 14:32:10 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 06 14:32:10 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 06 14:32:10 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 06 14:32:10 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 06 14:32:10 localhost kernel: ... version:                0
Jan 06 14:32:10 localhost kernel: ... bit width:              48
Jan 06 14:32:10 localhost kernel: ... generic registers:      6
Jan 06 14:32:10 localhost kernel: ... value mask:             0000ffffffffffff
Jan 06 14:32:10 localhost kernel: ... max period:             00007fffffffffff
Jan 06 14:32:10 localhost kernel: ... fixed-purpose events:   0
Jan 06 14:32:10 localhost kernel: ... event mask:             000000000000003f
Jan 06 14:32:10 localhost kernel: signal: max sigframe size: 1776
Jan 06 14:32:10 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 06 14:32:10 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 06 14:32:10 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 06 14:32:10 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 06 14:32:10 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 06 14:32:10 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 06 14:32:10 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 06 14:32:10 localhost kernel: node 0 deferred pages initialised in 11ms
Jan 06 14:32:10 localhost kernel: Memory: 7763892K/8388068K available (16384K kernel code, 5796K rwdata, 13908K rodata, 4196K init, 7200K bss, 618248K reserved, 0K cma-reserved)
Jan 06 14:32:10 localhost kernel: devtmpfs: initialized
Jan 06 14:32:10 localhost kernel: x86/mm: Memory block size: 128MB
Jan 06 14:32:10 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 06 14:32:10 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 06 14:32:10 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 06 14:32:10 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 06 14:32:10 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 06 14:32:10 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 06 14:32:10 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 06 14:32:10 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 06 14:32:10 localhost kernel: audit: type=2000 audit(1767709928.631:1): state=initialized audit_enabled=0 res=1
Jan 06 14:32:10 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 06 14:32:10 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 06 14:32:10 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 06 14:32:10 localhost kernel: cpuidle: using governor menu
Jan 06 14:32:10 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 06 14:32:10 localhost kernel: PCI: Using configuration type 1 for base access
Jan 06 14:32:10 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 06 14:32:10 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 06 14:32:10 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 06 14:32:10 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 06 14:32:10 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 06 14:32:10 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 06 14:32:10 localhost kernel: Demotion targets for Node 0: null
Jan 06 14:32:10 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 06 14:32:10 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 06 14:32:10 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 06 14:32:10 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 06 14:32:10 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 06 14:32:10 localhost kernel: ACPI: Interpreter enabled
Jan 06 14:32:10 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 06 14:32:10 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 06 14:32:10 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 06 14:32:10 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 06 14:32:10 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 06 14:32:10 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 06 14:32:10 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [3] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [4] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [5] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [6] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [7] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [8] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [9] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [10] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [11] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [12] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [13] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [14] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [15] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [16] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [17] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [18] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [19] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [20] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [21] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [22] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [23] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [24] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [25] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [26] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [27] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [28] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [29] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [30] registered
Jan 06 14:32:10 localhost kernel: acpiphp: Slot [31] registered
Jan 06 14:32:10 localhost kernel: PCI host bridge to bus 0000:00
Jan 06 14:32:10 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 06 14:32:10 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 06 14:32:10 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 06 14:32:10 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 06 14:32:10 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 06 14:32:10 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 06 14:32:10 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 06 14:32:10 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 06 14:32:10 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 06 14:32:10 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 06 14:32:10 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 06 14:32:10 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 06 14:32:10 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 06 14:32:10 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 06 14:32:10 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 06 14:32:10 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 06 14:32:10 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 06 14:32:10 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 06 14:32:10 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 06 14:32:10 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 06 14:32:10 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 06 14:32:10 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 06 14:32:10 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 06 14:32:10 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 06 14:32:10 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 06 14:32:10 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 06 14:32:10 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 06 14:32:10 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 06 14:32:10 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 06 14:32:10 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 06 14:32:10 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 06 14:32:10 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 06 14:32:10 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 06 14:32:10 localhost kernel: iommu: Default domain type: Translated
Jan 06 14:32:10 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 06 14:32:10 localhost kernel: SCSI subsystem initialized
Jan 06 14:32:10 localhost kernel: ACPI: bus type USB registered
Jan 06 14:32:10 localhost kernel: usbcore: registered new interface driver usbfs
Jan 06 14:32:10 localhost kernel: usbcore: registered new interface driver hub
Jan 06 14:32:10 localhost kernel: usbcore: registered new device driver usb
Jan 06 14:32:10 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 06 14:32:10 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 06 14:32:10 localhost kernel: PTP clock support registered
Jan 06 14:32:10 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 06 14:32:10 localhost kernel: NetLabel: Initializing
Jan 06 14:32:10 localhost kernel: NetLabel:  domain hash size = 128
Jan 06 14:32:10 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 06 14:32:10 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 06 14:32:10 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 06 14:32:10 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 06 14:32:10 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 06 14:32:10 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 06 14:32:10 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 06 14:32:10 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 06 14:32:10 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 06 14:32:10 localhost kernel: vgaarb: loaded
Jan 06 14:32:10 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 06 14:32:10 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 06 14:32:10 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 06 14:32:10 localhost kernel: pnp: PnP ACPI init
Jan 06 14:32:10 localhost kernel: pnp 00:03: [dma 2]
Jan 06 14:32:10 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 06 14:32:10 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 06 14:32:10 localhost kernel: NET: Registered PF_INET protocol family
Jan 06 14:32:10 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 06 14:32:10 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 06 14:32:10 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 06 14:32:10 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 06 14:32:10 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 06 14:32:10 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 06 14:32:10 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 06 14:32:10 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 06 14:32:10 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 06 14:32:10 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 06 14:32:10 localhost kernel: NET: Registered PF_XDP protocol family
Jan 06 14:32:10 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 06 14:32:10 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 06 14:32:10 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 06 14:32:10 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 06 14:32:10 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 06 14:32:10 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 06 14:32:10 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 06 14:32:10 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 79533 usecs
Jan 06 14:32:10 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 06 14:32:10 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 06 14:32:10 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 06 14:32:10 localhost kernel: ACPI: bus type thunderbolt registered
Jan 06 14:32:10 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 06 14:32:10 localhost kernel: Initialise system trusted keyrings
Jan 06 14:32:10 localhost kernel: Key type blacklist registered
Jan 06 14:32:10 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 06 14:32:10 localhost kernel: zbud: loaded
Jan 06 14:32:10 localhost kernel: integrity: Platform Keyring initialized
Jan 06 14:32:10 localhost kernel: integrity: Machine keyring initialized
Jan 06 14:32:10 localhost kernel: Freeing initrd memory: 87840K
Jan 06 14:32:10 localhost kernel: NET: Registered PF_ALG protocol family
Jan 06 14:32:10 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 06 14:32:10 localhost kernel: Key type asymmetric registered
Jan 06 14:32:10 localhost kernel: Asymmetric key parser 'x509' registered
Jan 06 14:32:10 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 06 14:32:10 localhost kernel: io scheduler mq-deadline registered
Jan 06 14:32:10 localhost kernel: io scheduler kyber registered
Jan 06 14:32:10 localhost kernel: io scheduler bfq registered
Jan 06 14:32:10 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 06 14:32:10 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 06 14:32:10 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 06 14:32:10 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 06 14:32:10 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 06 14:32:10 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 06 14:32:10 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 06 14:32:10 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 06 14:32:10 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 06 14:32:10 localhost kernel: Non-volatile memory driver v1.3
Jan 06 14:32:10 localhost kernel: rdac: device handler registered
Jan 06 14:32:10 localhost kernel: hp_sw: device handler registered
Jan 06 14:32:10 localhost kernel: emc: device handler registered
Jan 06 14:32:10 localhost kernel: alua: device handler registered
Jan 06 14:32:10 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 06 14:32:10 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 06 14:32:10 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 06 14:32:10 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 06 14:32:10 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 06 14:32:10 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 06 14:32:10 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 06 14:32:10 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-655.el9.x86_64 uhci_hcd
Jan 06 14:32:10 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 06 14:32:10 localhost kernel: hub 1-0:1.0: USB hub found
Jan 06 14:32:10 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 06 14:32:10 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 06 14:32:10 localhost kernel: usbserial: USB Serial support registered for generic
Jan 06 14:32:10 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 06 14:32:10 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 06 14:32:10 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 06 14:32:10 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 06 14:32:10 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 06 14:32:10 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 06 14:32:10 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 06 14:32:10 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-06T14:32:09 UTC (1767709929)
Jan 06 14:32:10 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 06 14:32:10 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 06 14:32:10 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 06 14:32:10 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 06 14:32:10 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 06 14:32:10 localhost kernel: usbcore: registered new interface driver usbhid
Jan 06 14:32:10 localhost kernel: usbhid: USB HID core driver
Jan 06 14:32:10 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 06 14:32:10 localhost kernel: Initializing XFRM netlink socket
Jan 06 14:32:10 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 06 14:32:10 localhost kernel: Segment Routing with IPv6
Jan 06 14:32:10 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 06 14:32:10 localhost kernel: mpls_gso: MPLS GSO support
Jan 06 14:32:10 localhost kernel: IPI shorthand broadcast: enabled
Jan 06 14:32:10 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 06 14:32:10 localhost kernel: AES CTR mode by8 optimization enabled
Jan 06 14:32:10 localhost kernel: sched_clock: Marking stable (1306008080, 144557090)->(1526795149, -76229979)
Jan 06 14:32:10 localhost kernel: registered taskstats version 1
Jan 06 14:32:10 localhost kernel: Loading compiled-in X.509 certificates
Jan 06 14:32:10 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: cff02aed51f99e4030f8d5c362e1fce40d054fe7'
Jan 06 14:32:10 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 06 14:32:10 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 06 14:32:10 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 06 14:32:10 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 06 14:32:10 localhost kernel: Demotion targets for Node 0: null
Jan 06 14:32:10 localhost kernel: page_owner is disabled
Jan 06 14:32:10 localhost kernel: Key type .fscrypt registered
Jan 06 14:32:10 localhost kernel: Key type fscrypt-provisioning registered
Jan 06 14:32:10 localhost kernel: Key type big_key registered
Jan 06 14:32:10 localhost kernel: Key type encrypted registered
Jan 06 14:32:10 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 06 14:32:10 localhost kernel: Loading compiled-in module X.509 certificates
Jan 06 14:32:10 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: cff02aed51f99e4030f8d5c362e1fce40d054fe7'
Jan 06 14:32:10 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 06 14:32:10 localhost kernel: ima: No architecture policies found
Jan 06 14:32:10 localhost kernel: evm: Initialising EVM extended attributes:
Jan 06 14:32:10 localhost kernel: evm: security.selinux
Jan 06 14:32:10 localhost kernel: evm: security.SMACK64 (disabled)
Jan 06 14:32:10 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 06 14:32:10 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 06 14:32:10 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 06 14:32:10 localhost kernel: evm: security.apparmor (disabled)
Jan 06 14:32:10 localhost kernel: evm: security.ima
Jan 06 14:32:10 localhost kernel: evm: security.capability
Jan 06 14:32:10 localhost kernel: evm: HMAC attrs: 0x1
Jan 06 14:32:10 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 06 14:32:10 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 06 14:32:10 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 06 14:32:10 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 06 14:32:10 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 06 14:32:10 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 06 14:32:10 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 06 14:32:10 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 06 14:32:10 localhost kernel: Running certificate verification RSA selftest
Jan 06 14:32:10 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 06 14:32:10 localhost kernel: Running certificate verification ECDSA selftest
Jan 06 14:32:10 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 06 14:32:10 localhost kernel: clk: Disabling unused clocks
Jan 06 14:32:10 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 06 14:32:10 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 06 14:32:10 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 06 14:32:10 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Jan 06 14:32:10 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 06 14:32:10 localhost kernel: Run /init as init process
Jan 06 14:32:10 localhost kernel:   with arguments:
Jan 06 14:32:10 localhost kernel:     /init
Jan 06 14:32:10 localhost kernel:   with environment:
Jan 06 14:32:10 localhost kernel:     HOME=/
Jan 06 14:32:10 localhost kernel:     TERM=linux
Jan 06 14:32:10 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64
Jan 06 14:32:10 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 06 14:32:10 localhost systemd[1]: Detected virtualization kvm.
Jan 06 14:32:10 localhost systemd[1]: Detected architecture x86-64.
Jan 06 14:32:10 localhost systemd[1]: Running in initrd.
Jan 06 14:32:10 localhost systemd[1]: No hostname configured, using default hostname.
Jan 06 14:32:10 localhost systemd[1]: Hostname set to <localhost>.
Jan 06 14:32:10 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 06 14:32:10 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 06 14:32:10 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 06 14:32:10 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 06 14:32:10 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 06 14:32:10 localhost systemd[1]: Reached target Local File Systems.
Jan 06 14:32:10 localhost systemd[1]: Reached target Path Units.
Jan 06 14:32:10 localhost systemd[1]: Reached target Slice Units.
Jan 06 14:32:10 localhost systemd[1]: Reached target Swaps.
Jan 06 14:32:10 localhost systemd[1]: Reached target Timer Units.
Jan 06 14:32:10 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 06 14:32:10 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 06 14:32:10 localhost systemd[1]: Listening on Journal Socket.
Jan 06 14:32:10 localhost systemd[1]: Listening on udev Control Socket.
Jan 06 14:32:10 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 06 14:32:10 localhost systemd[1]: Reached target Socket Units.
Jan 06 14:32:10 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 06 14:32:10 localhost systemd[1]: Starting Journal Service...
Jan 06 14:32:10 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 06 14:32:10 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 06 14:32:10 localhost systemd[1]: Starting Create System Users...
Jan 06 14:32:10 localhost systemd[1]: Starting Setup Virtual Console...
Jan 06 14:32:10 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 06 14:32:10 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 06 14:32:10 localhost systemd[1]: Finished Create System Users.
Jan 06 14:32:10 localhost systemd-journald[303]: Journal started
Jan 06 14:32:10 localhost systemd-journald[303]: Runtime Journal (/run/log/journal/f243d16a3dea407f9cc341cda7bb8d99) is 8.0M, max 153.6M, 145.6M free.
Jan 06 14:32:10 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Jan 06 14:32:10 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Jan 06 14:32:10 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 06 14:32:10 localhost systemd[1]: Started Journal Service.
Jan 06 14:32:10 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 06 14:32:10 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 06 14:32:10 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 06 14:32:10 localhost systemd[1]: Finished Setup Virtual Console.
Jan 06 14:32:10 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 06 14:32:10 localhost systemd[1]: Starting dracut cmdline hook...
Jan 06 14:32:10 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 06 14:32:10 localhost dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Jan 06 14:32:10 localhost dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-655.el9.x86_64 root=UUID=f2a0a5c1-133f-4977-b837-e40b31cbd9cc ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 06 14:32:10 localhost systemd[1]: Finished dracut cmdline hook.
Jan 06 14:32:10 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 06 14:32:10 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 06 14:32:10 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 06 14:32:10 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 06 14:32:10 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 06 14:32:10 localhost kernel: RPC: Registered udp transport module.
Jan 06 14:32:10 localhost kernel: RPC: Registered tcp transport module.
Jan 06 14:32:10 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 06 14:32:10 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 06 14:32:11 localhost rpc.statd[442]: Version 2.5.4 starting
Jan 06 14:32:11 localhost rpc.statd[442]: Initializing NSM state
Jan 06 14:32:11 localhost rpc.idmapd[447]: Setting log level to 0
Jan 06 14:32:11 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 06 14:32:11 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 06 14:32:11 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Jan 06 14:32:11 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 06 14:32:11 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 06 14:32:11 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 06 14:32:11 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 06 14:32:11 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 06 14:32:11 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 06 14:32:11 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 06 14:32:11 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 06 14:32:11 localhost systemd[1]: Reached target Network.
Jan 06 14:32:11 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 06 14:32:11 localhost systemd[1]: Starting dracut initqueue hook...
Jan 06 14:32:11 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 06 14:32:11 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 06 14:32:11 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 06 14:32:11 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 06 14:32:11 localhost systemd[1]: Reached target System Initialization.
Jan 06 14:32:11 localhost systemd[1]: Reached target Basic System.
Jan 06 14:32:11 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 06 14:32:11 localhost kernel: libata version 3.00 loaded.
Jan 06 14:32:11 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 06 14:32:11 localhost kernel:  vda: vda1
Jan 06 14:32:11 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 06 14:32:11 localhost kernel: scsi host0: ata_piix
Jan 06 14:32:11 localhost systemd-udevd[478]: Network interface NamePolicy= disabled on kernel command line.
Jan 06 14:32:11 localhost kernel: scsi host1: ata_piix
Jan 06 14:32:11 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 06 14:32:11 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 06 14:32:11 localhost systemd[1]: Found device /dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc.
Jan 06 14:32:11 localhost systemd[1]: Reached target Initrd Root Device.
Jan 06 14:32:11 localhost kernel: ata1: found unknown device (class 0)
Jan 06 14:32:11 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 06 14:32:11 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 06 14:32:11 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 06 14:32:11 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 06 14:32:11 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 06 14:32:11 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 06 14:32:11 localhost systemd[1]: Finished dracut initqueue hook.
Jan 06 14:32:11 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 06 14:32:11 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 06 14:32:11 localhost systemd[1]: Reached target Remote File Systems.
Jan 06 14:32:11 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 06 14:32:11 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 06 14:32:11 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc...
Jan 06 14:32:11 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Jan 06 14:32:11 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc.
Jan 06 14:32:11 localhost systemd[1]: Mounting /sysroot...
Jan 06 14:32:12 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 06 14:32:12 localhost kernel: XFS (vda1): Mounting V5 Filesystem f2a0a5c1-133f-4977-b837-e40b31cbd9cc
Jan 06 14:32:12 localhost kernel: XFS (vda1): Ending clean mount
Jan 06 14:32:12 localhost systemd[1]: Mounted /sysroot.
Jan 06 14:32:12 localhost systemd[1]: Reached target Initrd Root File System.
Jan 06 14:32:12 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 06 14:32:12 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 06 14:32:12 localhost systemd[1]: Reached target Initrd File Systems.
Jan 06 14:32:12 localhost systemd[1]: Reached target Initrd Default Target.
Jan 06 14:32:12 localhost systemd[1]: Starting dracut mount hook...
Jan 06 14:32:12 localhost systemd[1]: Finished dracut mount hook.
Jan 06 14:32:12 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 06 14:32:12 localhost rpc.idmapd[447]: exiting on signal 15
Jan 06 14:32:12 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 06 14:32:12 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 06 14:32:12 localhost systemd[1]: Stopped target Network.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Timer Units.
Jan 06 14:32:12 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 06 14:32:12 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Basic System.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Path Units.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Remote File Systems.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Slice Units.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Socket Units.
Jan 06 14:32:12 localhost systemd[1]: Stopped target System Initialization.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Local File Systems.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Swaps.
Jan 06 14:32:12 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped dracut mount hook.
Jan 06 14:32:12 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 06 14:32:12 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 06 14:32:12 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 06 14:32:12 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 06 14:32:12 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 06 14:32:12 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 06 14:32:12 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 06 14:32:12 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 06 14:32:12 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 06 14:32:12 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 06 14:32:12 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 06 14:32:12 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 06 14:32:12 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Closed udev Control Socket.
Jan 06 14:32:12 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Closed udev Kernel Socket.
Jan 06 14:32:12 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 06 14:32:12 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 06 14:32:12 localhost systemd[1]: Starting Cleanup udev Database...
Jan 06 14:32:12 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 06 14:32:12 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 06 14:32:12 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Stopped Create System Users.
Jan 06 14:32:12 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 06 14:32:12 localhost systemd[1]: Finished Cleanup udev Database.
Jan 06 14:32:12 localhost systemd[1]: Reached target Switch Root.
Jan 06 14:32:12 localhost systemd[1]: Starting Switch Root...
Jan 06 14:32:12 localhost systemd[1]: Switching root.
Jan 06 14:32:12 localhost systemd-journald[303]: Journal stopped
Jan 06 14:32:13 localhost systemd-journald[303]: Received SIGTERM from PID 1 (systemd).
Jan 06 14:32:13 localhost kernel: audit: type=1404 audit(1767709932.868:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 06 14:32:13 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 06 14:32:13 localhost kernel: SELinux:  policy capability open_perms=1
Jan 06 14:32:13 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 06 14:32:13 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 06 14:32:13 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 06 14:32:13 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 06 14:32:13 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 06 14:32:13 localhost kernel: audit: type=1403 audit(1767709932.990:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 06 14:32:13 localhost systemd[1]: Successfully loaded SELinux policy in 124.455ms.
Jan 06 14:32:13 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.651ms.
Jan 06 14:32:13 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 06 14:32:13 localhost systemd[1]: Detected virtualization kvm.
Jan 06 14:32:13 localhost systemd[1]: Detected architecture x86-64.
Jan 06 14:32:13 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 14:32:13 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 06 14:32:13 localhost systemd[1]: Stopped Switch Root.
Jan 06 14:32:13 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 06 14:32:13 localhost systemd[1]: Created slice Slice /system/getty.
Jan 06 14:32:13 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 06 14:32:13 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 06 14:32:13 localhost systemd[1]: Created slice User and Session Slice.
Jan 06 14:32:13 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 06 14:32:13 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 06 14:32:13 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 06 14:32:13 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 06 14:32:13 localhost systemd[1]: Stopped target Switch Root.
Jan 06 14:32:13 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 06 14:32:13 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 06 14:32:13 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 06 14:32:13 localhost systemd[1]: Reached target Path Units.
Jan 06 14:32:13 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 06 14:32:13 localhost systemd[1]: Reached target Slice Units.
Jan 06 14:32:13 localhost systemd[1]: Reached target Swaps.
Jan 06 14:32:13 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 06 14:32:13 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 06 14:32:13 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 06 14:32:13 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 06 14:32:13 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 06 14:32:13 localhost systemd[1]: Listening on udev Control Socket.
Jan 06 14:32:13 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 06 14:32:13 localhost systemd[1]: Mounting Huge Pages File System...
Jan 06 14:32:13 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 06 14:32:13 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 06 14:32:13 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 06 14:32:13 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 06 14:32:13 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 06 14:32:13 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 06 14:32:13 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 06 14:32:13 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 06 14:32:13 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 06 14:32:13 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 06 14:32:13 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 06 14:32:13 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 06 14:32:13 localhost systemd[1]: Stopped Journal Service.
Jan 06 14:32:13 localhost kernel: fuse: init (API version 7.37)
Jan 06 14:32:13 localhost systemd[1]: Starting Journal Service...
Jan 06 14:32:13 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 06 14:32:13 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 06 14:32:13 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 06 14:32:13 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 06 14:32:13 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 06 14:32:13 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 06 14:32:13 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 06 14:32:13 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 06 14:32:13 localhost systemd-journald[676]: Journal started
Jan 06 14:32:13 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/bfa963f84c4f244b9e78b91a43b5e88e) is 8.0M, max 153.6M, 145.6M free.
Jan 06 14:32:13 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 06 14:32:13 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 06 14:32:13 localhost systemd[1]: Started Journal Service.
Jan 06 14:32:13 localhost systemd[1]: Mounted Huge Pages File System.
Jan 06 14:32:13 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 06 14:32:13 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 06 14:32:13 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 06 14:32:13 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 06 14:32:13 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 06 14:32:13 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 06 14:32:13 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 06 14:32:13 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 06 14:32:13 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 06 14:32:13 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 06 14:32:13 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 06 14:32:13 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 06 14:32:13 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 06 14:32:13 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 06 14:32:13 localhost kernel: ACPI: bus type drm_connector registered
Jan 06 14:32:13 localhost systemd[1]: Mounting FUSE Control File System...
Jan 06 14:32:13 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 06 14:32:13 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 06 14:32:13 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 06 14:32:13 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 06 14:32:13 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 06 14:32:13 localhost systemd[1]: Starting Create System Users...
Jan 06 14:32:13 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 06 14:32:13 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/bfa963f84c4f244b9e78b91a43b5e88e) is 8.0M, max 153.6M, 145.6M free.
Jan 06 14:32:13 localhost systemd-journald[676]: Received client request to flush runtime journal.
Jan 06 14:32:13 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 06 14:32:13 localhost systemd[1]: Mounted FUSE Control File System.
Jan 06 14:32:13 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 06 14:32:13 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 06 14:32:13 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 06 14:32:13 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 06 14:32:13 localhost systemd[1]: Finished Create System Users.
Jan 06 14:32:13 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 06 14:32:13 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 06 14:32:13 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 06 14:32:13 localhost systemd[1]: Reached target Local File Systems.
Jan 06 14:32:13 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 06 14:32:13 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 06 14:32:13 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 06 14:32:13 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 06 14:32:13 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 06 14:32:13 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 06 14:32:13 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 06 14:32:13 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Jan 06 14:32:13 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 06 14:32:13 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 06 14:32:13 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 06 14:32:13 localhost systemd[1]: Starting Security Auditing Service...
Jan 06 14:32:13 localhost systemd[1]: Starting RPC Bind...
Jan 06 14:32:13 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 06 14:32:13 localhost auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 06 14:32:13 localhost auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 06 14:32:13 localhost systemd[1]: Started RPC Bind.
Jan 06 14:32:13 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 06 14:32:13 localhost augenrules[706]: /sbin/augenrules: No change
Jan 06 14:32:13 localhost augenrules[721]: No rules
Jan 06 14:32:13 localhost augenrules[721]: enabled 1
Jan 06 14:32:13 localhost augenrules[721]: failure 1
Jan 06 14:32:13 localhost augenrules[721]: pid 700
Jan 06 14:32:13 localhost augenrules[721]: rate_limit 0
Jan 06 14:32:13 localhost augenrules[721]: backlog_limit 8192
Jan 06 14:32:13 localhost augenrules[721]: lost 0
Jan 06 14:32:13 localhost augenrules[721]: backlog 3
Jan 06 14:32:13 localhost augenrules[721]: backlog_wait_time 60000
Jan 06 14:32:13 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 06 14:32:13 localhost augenrules[721]: enabled 1
Jan 06 14:32:13 localhost augenrules[721]: failure 1
Jan 06 14:32:13 localhost augenrules[721]: pid 700
Jan 06 14:32:13 localhost augenrules[721]: rate_limit 0
Jan 06 14:32:13 localhost augenrules[721]: backlog_limit 8192
Jan 06 14:32:13 localhost augenrules[721]: lost 0
Jan 06 14:32:13 localhost augenrules[721]: backlog 0
Jan 06 14:32:13 localhost augenrules[721]: backlog_wait_time 60000
Jan 06 14:32:13 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 06 14:32:13 localhost augenrules[721]: enabled 1
Jan 06 14:32:13 localhost augenrules[721]: failure 1
Jan 06 14:32:13 localhost augenrules[721]: pid 700
Jan 06 14:32:13 localhost augenrules[721]: rate_limit 0
Jan 06 14:32:13 localhost augenrules[721]: backlog_limit 8192
Jan 06 14:32:13 localhost augenrules[721]: lost 0
Jan 06 14:32:13 localhost augenrules[721]: backlog 0
Jan 06 14:32:13 localhost augenrules[721]: backlog_wait_time 60000
Jan 06 14:32:13 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 06 14:32:13 localhost systemd[1]: Started Security Auditing Service.
Jan 06 14:32:13 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 06 14:32:13 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 06 14:32:14 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 06 14:32:14 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 06 14:32:14 localhost systemd[1]: Starting Update is Completed...
Jan 06 14:32:14 localhost systemd[1]: Finished Update is Completed.
Jan 06 14:32:14 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Jan 06 14:32:14 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 06 14:32:14 localhost systemd[1]: Reached target System Initialization.
Jan 06 14:32:14 localhost systemd[1]: Started dnf makecache --timer.
Jan 06 14:32:14 localhost systemd[1]: Started Daily rotation of log files.
Jan 06 14:32:14 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 06 14:32:14 localhost systemd[1]: Reached target Timer Units.
Jan 06 14:32:14 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 06 14:32:14 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 06 14:32:14 localhost systemd[1]: Reached target Socket Units.
Jan 06 14:32:14 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 06 14:32:14 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 06 14:32:14 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 06 14:32:14 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 06 14:32:14 localhost systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Jan 06 14:32:14 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 06 14:32:14 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 06 14:32:14 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 06 14:32:14 localhost systemd[1]: Reached target Basic System.
Jan 06 14:32:14 localhost dbus-broker-lau[741]: Ready
Jan 06 14:32:14 localhost systemd[1]: Starting NTP client/server...
Jan 06 14:32:14 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 06 14:32:14 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 06 14:32:14 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 06 14:32:14 localhost systemd[1]: Started irqbalance daemon.
Jan 06 14:32:14 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 06 14:32:14 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 06 14:32:14 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 06 14:32:14 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 06 14:32:14 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 06 14:32:14 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 06 14:32:14 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 06 14:32:14 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 06 14:32:14 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 06 14:32:14 localhost systemd[1]: Starting User Login Management...
Jan 06 14:32:14 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 06 14:32:14 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 06 14:32:14 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 06 14:32:14 localhost chronyd[794]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 06 14:32:14 localhost chronyd[794]: Loaded 0 symmetric keys
Jan 06 14:32:14 localhost chronyd[794]: Using right/UTC timezone to obtain leap second data
Jan 06 14:32:14 localhost chronyd[794]: Loaded seccomp filter (level 2)
Jan 06 14:32:14 localhost systemd[1]: Started NTP client/server.
Jan 06 14:32:14 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 06 14:32:14 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 06 14:32:14 localhost systemd-logind[791]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 06 14:32:14 localhost systemd-logind[791]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 06 14:32:14 localhost systemd-logind[791]: New seat seat0.
Jan 06 14:32:14 localhost systemd[1]: Started User Login Management.
Jan 06 14:32:14 localhost kernel: kvm_amd: TSC scaling supported
Jan 06 14:32:14 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 06 14:32:14 localhost kernel: kvm_amd: Nested Paging enabled
Jan 06 14:32:14 localhost kernel: kvm_amd: LBR virtualization supported
Jan 06 14:32:14 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 06 14:32:14 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 06 14:32:14 localhost kernel: Console: switching to colour dummy device 80x25
Jan 06 14:32:14 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 06 14:32:14 localhost kernel: [drm] features: -context_init
Jan 06 14:32:14 localhost kernel: [drm] number of scanouts: 1
Jan 06 14:32:14 localhost kernel: [drm] number of cap sets: 0
Jan 06 14:32:14 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 06 14:32:14 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 06 14:32:14 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 06 14:32:14 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 06 14:32:14 localhost iptables.init[777]: iptables: Applying firewall rules: [  OK  ]
Jan 06 14:32:14 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 06 14:32:14 localhost cloud-init[838]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 06 Jan 2026 14:32:14 +0000. Up 6.59 seconds.
Jan 06 14:32:15 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 06 14:32:15 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 06 14:32:15 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpd3opwzfd.mount: Deactivated successfully.
Jan 06 14:32:15 localhost systemd[1]: Starting Hostname Service...
Jan 06 14:32:15 localhost systemd[1]: Started Hostname Service.
Jan 06 14:32:15 np0005575490.novalocal systemd-hostnamed[852]: Hostname set to <np0005575490.novalocal> (static)
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Reached target Preparation for Network.
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Starting Network Manager...
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4412] NetworkManager (version 1.54.2-1.el9) is starting... (boot:9618711b-fe4f-49ab-b47b-caab4b22688f)
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4417] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4530] manager[0x5646e7de7000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4573] hostname: hostname: using hostnamed
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4573] hostname: static hostname changed from (none) to "np0005575490.novalocal"
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4581] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4699] manager[0x5646e7de7000]: rfkill: Wi-Fi hardware radio set enabled
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4700] manager[0x5646e7de7000]: rfkill: WWAN hardware radio set enabled
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4763] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4764] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4765] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4766] manager: Networking is enabled by state file
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4769] settings: Loaded settings plugin: keyfile (internal)
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4784] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4819] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4836] dhcp: init: Using DHCP client 'internal'
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4842] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4863] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4875] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4889] device (lo): Activation: starting connection 'lo' (0d776732-e25f-45b4-9be2-41af4991938d)
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4904] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4909] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4949] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4955] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4959] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4962] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4965] device (eth0): carrier: link connected
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4970] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4979] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4990] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4996] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.4998] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.5002] manager: NetworkManager state is now CONNECTING
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.5004] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.5014] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.5019] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Started Network Manager.
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Reached target Network.
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.5223] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.5226] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 06 14:32:15 np0005575490.novalocal NetworkManager[856]: <info>  [1767709935.5236] device (lo): Activation: successful, device activated.
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Reached target NFS client services.
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: Reached target Remote File Systems.
Jan 06 14:32:15 np0005575490.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 06 14:32:18 np0005575490.novalocal NetworkManager[856]: <info>  [1767709938.6242] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Jan 06 14:32:18 np0005575490.novalocal NetworkManager[856]: <info>  [1767709938.6258] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 06 14:32:18 np0005575490.novalocal NetworkManager[856]: <info>  [1767709938.6285] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 14:32:18 np0005575490.novalocal NetworkManager[856]: <info>  [1767709938.6318] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 14:32:18 np0005575490.novalocal NetworkManager[856]: <info>  [1767709938.6319] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 14:32:18 np0005575490.novalocal NetworkManager[856]: <info>  [1767709938.6323] manager: NetworkManager state is now CONNECTED_SITE
Jan 06 14:32:18 np0005575490.novalocal NetworkManager[856]: <info>  [1767709938.6327] device (eth0): Activation: successful, device activated.
Jan 06 14:32:18 np0005575490.novalocal NetworkManager[856]: <info>  [1767709938.6333] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 06 14:32:18 np0005575490.novalocal NetworkManager[856]: <info>  [1767709938.6335] manager: startup complete
Jan 06 14:32:18 np0005575490.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 06 14:32:18 np0005575490.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 06 Jan 2026 14:32:18 +0000. Up 10.67 seconds.
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.193        | 255.255.255.0 | global | fa:16:3e:c9:07:d0 |
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fec9:7d0/64 |       .       |  link  | fa:16:3e:c9:07:d0 |
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 06 14:32:18 np0005575490.novalocal cloud-init[921]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Jan 06 14:32:19 np0005575490.novalocal cloud-init[921]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Jan 06 14:32:19 np0005575490.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 06 14:32:20 np0005575490.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Jan 06 14:32:20 np0005575490.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 06 14:32:20 np0005575490.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Jan 06 14:32:20 np0005575490.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Jan 06 14:32:20 np0005575490.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Jan 06 14:32:20 np0005575490.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: Generating public/private rsa key pair.
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: The key fingerprint is:
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: SHA256:c1vMvaRCzf3WVAlUCE3n02K4pzXZL8gmKQ/r1yoy8g8 root@np0005575490.novalocal
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: The key's randomart image is:
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: +---[RSA 3072]----+
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |           .=+oo |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |             ++ o|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |            . o+o|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |           = = +o|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |        S o B O o|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |         + = B =o|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |      E o =.* o =|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |    . o..=.+.  o |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |     o.=+oo.     |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: The key fingerprint is:
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: SHA256:OURI6UD7ngFGY15d4N/SuUo6+ZtjKc2avZRRfm9BPVc root@np0005575490.novalocal
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: The key's randomart image is:
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: +---[ECDSA 256]---+
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |   .=.o+oo.     E|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |   +.+oo.       o|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |    =o  o   .  oo|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |   . o.. o = .. o|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |      o S + = .. |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |     . o . + o ..|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |      o  ++..   o|
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |        +**o   . |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |        +*B+     |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: The key fingerprint is:
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: SHA256:1OIxdoqBidq750goUad4PeDXvKP7DXKB1bjNpFVCFIU root@np0005575490.novalocal
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: The key's randomart image is:
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: +--[ED25519 256]--+
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |       o=+o      |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |   . o oE+       |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |  + + + X o      |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: | * = = & *       |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |+ = = B S        |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: | + o . o         |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |o o . =          |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |.. o.+ +         |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: |  oo+o. .        |
Jan 06 14:32:20 np0005575490.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Reached target Network is Online.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Starting System Logging Service...
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 06 14:32:20 np0005575490.novalocal sm-notify[1004]: Version 2.5.4 starting
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Starting Permit User Sessions...
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 06 14:32:20 np0005575490.novalocal sshd[1006]: Server listening on 0.0.0.0 port 22.
Jan 06 14:32:20 np0005575490.novalocal sshd[1006]: Server listening on :: port 22.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Finished Permit User Sessions.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Started Command Scheduler.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Started Getty on tty1.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Reached target Login Prompts.
Jan 06 14:32:20 np0005575490.novalocal rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Jan 06 14:32:20 np0005575490.novalocal rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Started System Logging Service.
Jan 06 14:32:20 np0005575490.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Jan 06 14:32:20 np0005575490.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 06 14:32:20 np0005575490.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 9% if used.)
Jan 06 14:32:20 np0005575490.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Reached target Multi-User System.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 06 14:32:20 np0005575490.novalocal sshd-session[1019]: Unable to negotiate with 38.102.83.114 port 56638: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 06 14:32:20 np0005575490.novalocal sshd-session[1038]: Unable to negotiate with 38.102.83.114 port 56656: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 06 14:32:20 np0005575490.novalocal rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 06 14:32:20 np0005575490.novalocal sshd-session[1051]: Unable to negotiate with 38.102.83.114 port 56660: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 06 14:32:20 np0005575490.novalocal sshd-session[1009]: Connection closed by 38.102.83.114 port 56628 [preauth]
Jan 06 14:32:20 np0005575490.novalocal sshd-session[1063]: Connection reset by 38.102.83.114 port 56672 [preauth]
Jan 06 14:32:20 np0005575490.novalocal sshd-session[1074]: Connection reset by 38.102.83.114 port 56688 [preauth]
Jan 06 14:32:20 np0005575490.novalocal kdumpctl[1021]: kdump: No kdump initial ramdisk found.
Jan 06 14:32:20 np0005575490.novalocal kdumpctl[1021]: kdump: Rebuilding /boot/initramfs-5.14.0-655.el9.x86_64kdump.img
Jan 06 14:32:20 np0005575490.novalocal sshd-session[1081]: Unable to negotiate with 38.102.83.114 port 56704: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 06 14:32:20 np0005575490.novalocal sshd-session[1028]: Connection closed by 38.102.83.114 port 56646 [preauth]
Jan 06 14:32:20 np0005575490.novalocal sshd-session[1085]: Unable to negotiate with 38.102.83.114 port 56706: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 06 14:32:20 np0005575490.novalocal cloud-init[1150]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 06 Jan 2026 14:32:20 +0000. Up 12.61 seconds.
Jan 06 14:32:20 np0005575490.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 06 14:32:21 np0005575490.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 06 14:32:21 np0005575490.novalocal dracut[1283]: dracut-057-102.git20250818.el9
Jan 06 14:32:21 np0005575490.novalocal cloud-init[1301]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 06 Jan 2026 14:32:21 +0000. Up 13.03 seconds.
Jan 06 14:32:21 np0005575490.novalocal cloud-init[1303]: #############################################################
Jan 06 14:32:21 np0005575490.novalocal cloud-init[1304]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 06 14:32:21 np0005575490.novalocal cloud-init[1308]: 256 SHA256:OURI6UD7ngFGY15d4N/SuUo6+ZtjKc2avZRRfm9BPVc root@np0005575490.novalocal (ECDSA)
Jan 06 14:32:21 np0005575490.novalocal cloud-init[1317]: 256 SHA256:1OIxdoqBidq750goUad4PeDXvKP7DXKB1bjNpFVCFIU root@np0005575490.novalocal (ED25519)
Jan 06 14:32:21 np0005575490.novalocal dracut[1285]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/f2a0a5c1-133f-4977-b837-e40b31cbd9cc /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-655.el9.x86_64kdump.img 5.14.0-655.el9.x86_64
Jan 06 14:32:21 np0005575490.novalocal cloud-init[1323]: 3072 SHA256:c1vMvaRCzf3WVAlUCE3n02K4pzXZL8gmKQ/r1yoy8g8 root@np0005575490.novalocal (RSA)
Jan 06 14:32:21 np0005575490.novalocal cloud-init[1327]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 06 14:32:21 np0005575490.novalocal cloud-init[1329]: #############################################################
Jan 06 14:32:21 np0005575490.novalocal cloud-init[1301]: Cloud-init v. 24.4-8.el9 finished at Tue, 06 Jan 2026 14:32:21 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.21 seconds
Jan 06 14:32:21 np0005575490.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 06 14:32:21 np0005575490.novalocal systemd[1]: Reached target Cloud-init target.
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: memstrack is not available
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 06 14:32:22 np0005575490.novalocal dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 06 14:32:23 np0005575490.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 06 14:32:23 np0005575490.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 06 14:32:23 np0005575490.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 06 14:32:23 np0005575490.novalocal dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 06 14:32:23 np0005575490.novalocal dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 06 14:32:23 np0005575490.novalocal dracut[1285]: memstrack is not available
Jan 06 14:32:23 np0005575490.novalocal dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 06 14:32:23 np0005575490.novalocal dracut[1285]: *** Including module: systemd ***
Jan 06 14:32:23 np0005575490.novalocal chronyd[794]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Jan 06 14:32:23 np0005575490.novalocal chronyd[794]: System clock TAI offset set to 37 seconds
Jan 06 14:32:23 np0005575490.novalocal dracut[1285]: *** Including module: fips ***
Jan 06 14:32:24 np0005575490.novalocal dracut[1285]: *** Including module: systemd-initrd ***
Jan 06 14:32:24 np0005575490.novalocal dracut[1285]: *** Including module: i18n ***
Jan 06 14:32:24 np0005575490.novalocal dracut[1285]: *** Including module: drm ***
Jan 06 14:32:24 np0005575490.novalocal dracut[1285]: *** Including module: prefixdevname ***
Jan 06 14:32:24 np0005575490.novalocal dracut[1285]: *** Including module: kernel-modules ***
Jan 06 14:32:24 np0005575490.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: IRQ 25 affinity is now unmanaged
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: IRQ 31 affinity is now unmanaged
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: IRQ 28 affinity is now unmanaged
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: IRQ 32 affinity is now unmanaged
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: IRQ 30 affinity is now unmanaged
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 06 14:32:25 np0005575490.novalocal irqbalance[779]: IRQ 29 affinity is now unmanaged
Jan 06 14:32:25 np0005575490.novalocal dracut[1285]: *** Including module: kernel-modules-extra ***
Jan 06 14:32:25 np0005575490.novalocal dracut[1285]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 06 14:32:25 np0005575490.novalocal dracut[1285]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 06 14:32:25 np0005575490.novalocal dracut[1285]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 06 14:32:25 np0005575490.novalocal dracut[1285]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 06 14:32:25 np0005575490.novalocal dracut[1285]: *** Including module: qemu ***
Jan 06 14:32:25 np0005575490.novalocal dracut[1285]: *** Including module: fstab-sys ***
Jan 06 14:32:25 np0005575490.novalocal dracut[1285]: *** Including module: rootfs-block ***
Jan 06 14:32:25 np0005575490.novalocal dracut[1285]: *** Including module: terminfo ***
Jan 06 14:32:25 np0005575490.novalocal systemd[1]: serial-getty@ttyS0.service: Deactivated successfully.
Jan 06 14:32:25 np0005575490.novalocal dracut[1285]: *** Including module: udev-rules ***
Jan 06 14:32:25 np0005575490.novalocal systemd[1]: serial-getty@ttyS0.service: Scheduled restart job, restart counter is at 1.
Jan 06 14:32:25 np0005575490.novalocal systemd[1]: Stopped Serial Getty on ttyS0.
Jan 06 14:32:25 np0005575490.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 06 14:32:26 np0005575490.novalocal dracut[1285]: Skipping udev rule: 91-permissions.rules
Jan 06 14:32:26 np0005575490.novalocal dracut[1285]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 06 14:32:26 np0005575490.novalocal dracut[1285]: *** Including module: virtiofs ***
Jan 06 14:32:26 np0005575490.novalocal dracut[1285]: *** Including module: dracut-systemd ***
Jan 06 14:32:26 np0005575490.novalocal dracut[1285]: *** Including module: usrmount ***
Jan 06 14:32:26 np0005575490.novalocal dracut[1285]: *** Including module: base ***
Jan 06 14:32:26 np0005575490.novalocal dracut[1285]: *** Including module: fs-lib ***
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]: *** Including module: kdumpbase ***
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:   microcode_ctl module: mangling fw_dir
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: configuration "intel" is ignored
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 06 14:32:27 np0005575490.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 06 14:32:28 np0005575490.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 06 14:32:28 np0005575490.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 06 14:32:28 np0005575490.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 06 14:32:28 np0005575490.novalocal dracut[1285]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 06 14:32:28 np0005575490.novalocal dracut[1285]: *** Including module: openssl ***
Jan 06 14:32:28 np0005575490.novalocal dracut[1285]: *** Including module: shutdown ***
Jan 06 14:32:28 np0005575490.novalocal dracut[1285]: *** Including module: squash ***
Jan 06 14:32:28 np0005575490.novalocal dracut[1285]: *** Including modules done ***
Jan 06 14:32:28 np0005575490.novalocal dracut[1285]: *** Installing kernel module dependencies ***
Jan 06 14:32:28 np0005575490.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 06 14:32:29 np0005575490.novalocal dracut[1285]: *** Installing kernel module dependencies done ***
Jan 06 14:32:29 np0005575490.novalocal dracut[1285]: *** Resolving executable dependencies ***
Jan 06 14:32:30 np0005575490.novalocal dracut[1285]: *** Resolving executable dependencies done ***
Jan 06 14:32:30 np0005575490.novalocal dracut[1285]: *** Generating early-microcode cpio image ***
Jan 06 14:32:31 np0005575490.novalocal dracut[1285]: *** Store current command line parameters ***
Jan 06 14:32:31 np0005575490.novalocal dracut[1285]: Stored kernel commandline:
Jan 06 14:32:31 np0005575490.novalocal dracut[1285]: No dracut internal kernel commandline stored in the initramfs
Jan 06 14:32:31 np0005575490.novalocal dracut[1285]: *** Install squash loader ***
Jan 06 14:32:32 np0005575490.novalocal dracut[1285]: *** Squashing the files inside the initramfs ***
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: *** Squashing the files inside the initramfs done ***
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: *** Creating image file '/boot/initramfs-5.14.0-655.el9.x86_64kdump.img' ***
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: *** Hardlinking files ***
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: Mode:           real
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: Files:          50
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: Linked:         0 files
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: Compared:       0 xattrs
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: Compared:       0 files
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: Saved:          0 B
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: Duration:       0.000859 seconds
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: *** Hardlinking files done ***
Jan 06 14:32:33 np0005575490.novalocal dracut[1285]: *** Creating initramfs image file '/boot/initramfs-5.14.0-655.el9.x86_64kdump.img' done ***
Jan 06 14:32:34 np0005575490.novalocal kdumpctl[1021]: kdump: kexec: loaded kdump kernel
Jan 06 14:32:34 np0005575490.novalocal kdumpctl[1021]: kdump: Starting kdump: [OK]
Jan 06 14:32:34 np0005575490.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 06 14:32:34 np0005575490.novalocal systemd[1]: Startup finished in 1.753s (kernel) + 2.848s (initrd) + 21.406s (userspace) = 26.007s.
Jan 06 14:32:36 np0005575490.novalocal sshd-session[4295]: Accepted publickey for zuul from 38.102.83.114 port 56046 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 06 14:32:36 np0005575490.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 06 14:32:36 np0005575490.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 06 14:32:36 np0005575490.novalocal systemd-logind[791]: New session 1 of user zuul.
Jan 06 14:32:36 np0005575490.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 06 14:32:36 np0005575490.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Queued start job for default target Main User Target.
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Created slice User Application Slice.
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Reached target Paths.
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Reached target Timers.
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Starting D-Bus User Message Bus Socket...
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Starting Create User's Volatile Files and Directories...
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Finished Create User's Volatile Files and Directories.
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Listening on D-Bus User Message Bus Socket.
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Reached target Sockets.
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Reached target Basic System.
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Reached target Main User Target.
Jan 06 14:32:36 np0005575490.novalocal systemd[4299]: Startup finished in 166ms.
Jan 06 14:32:36 np0005575490.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 06 14:32:36 np0005575490.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 06 14:32:36 np0005575490.novalocal sshd-session[4295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 14:32:37 np0005575490.novalocal python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 14:32:39 np0005575490.novalocal python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 14:32:45 np0005575490.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 06 14:32:46 np0005575490.novalocal python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 14:32:47 np0005575490.novalocal python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 06 14:32:49 np0005575490.novalocal python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwZeqdXglNVmz7VzQ2jCtEQpOOj9sINRTLoT6bmbL/JrDiGLioJ6HMF56lUPUJ7ZydCW0m1BuqWbV/2iry4Mv7UUbfiEauIWu0Ul+r8Z4IuDhtCMsKbZ0cyUuBi5kSQfeIvGZgSREOHQ6PIKnUYbXPJEomVnTprRW/bujETiACdtbfkCtnvNtaSNTfBziBGLXYW5Tsb1QwF9UGCvstoFIEYaaFc0X591oyjPFyPo/eyxyy6arNJHeRQXnb65t2RXcXvBaKz3jw+Xsvk/ueAp3a5OWts47ObDy14cIlp1OlM2CfyBTMxYlofNiF7yUVu37VbE//bW21VJ2nWcPGj47fgHRSrcbzIHBVlJkvUcrwHETAxQ4ifgq5oin07D8dnZM3sa+C6PBHG69K9BIHJV8eJIxKEofmGg1UskQ8obvs/515pkBIf7sg9Xc1o8Yl4dxCe6VLEssCDG5ky9hiojlbRt2g9lTM5a9K5LE00fQdTnhp7cxbc78d8ZptBzr48qc= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:32:49 np0005575490.novalocal python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:32:50 np0005575490.novalocal python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:32:50 np0005575490.novalocal python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767709969.651622-207-117692691418499/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=06078a678e914ea2a807612290424eef_id_rsa follow=False checksum=8c7af324516ff910b9296ce75edc28c1e722a3c3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:32:50 np0005575490.novalocal python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:32:51 np0005575490.novalocal python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767709970.6632862-240-16762076950382/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=06078a678e914ea2a807612290424eef_id_rsa.pub follow=False checksum=2866030ff00d5bafb6b869c41352882da9c3a268 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:32:52 np0005575490.novalocal python3[4971]: ansible-ping Invoked with data=pong
Jan 06 14:32:53 np0005575490.novalocal python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 14:32:55 np0005575490.novalocal python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 06 14:32:56 np0005575490.novalocal python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:32:56 np0005575490.novalocal python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:32:56 np0005575490.novalocal python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:32:57 np0005575490.novalocal python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:32:57 np0005575490.novalocal python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:32:57 np0005575490.novalocal python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:32:58 np0005575490.novalocal sudo[5229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhkaxsgvdrwhvefveiixwcwkhjmvfcst ; /usr/bin/python3'
Jan 06 14:32:58 np0005575490.novalocal sudo[5229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:32:59 np0005575490.novalocal python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:32:59 np0005575490.novalocal sudo[5229]: pam_unix(sudo:session): session closed for user root
Jan 06 14:32:59 np0005575490.novalocal sudo[5307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brmxgayrmvxsrfpcjczftyrbsyfrimro ; /usr/bin/python3'
Jan 06 14:32:59 np0005575490.novalocal sudo[5307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:32:59 np0005575490.novalocal python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:32:59 np0005575490.novalocal sudo[5307]: pam_unix(sudo:session): session closed for user root
Jan 06 14:33:00 np0005575490.novalocal sudo[5380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkpzccwfpvnwrilstufbdrlrmhgytmqn ; /usr/bin/python3'
Jan 06 14:33:00 np0005575490.novalocal sudo[5380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:33:00 np0005575490.novalocal python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767709979.30574-21-278717893575036/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:33:00 np0005575490.novalocal sudo[5380]: pam_unix(sudo:session): session closed for user root
Jan 06 14:33:00 np0005575490.novalocal python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:01 np0005575490.novalocal python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:01 np0005575490.novalocal python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:01 np0005575490.novalocal python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:02 np0005575490.novalocal python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:02 np0005575490.novalocal python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:02 np0005575490.novalocal python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:02 np0005575490.novalocal python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:03 np0005575490.novalocal python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:03 np0005575490.novalocal python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:03 np0005575490.novalocal python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:04 np0005575490.novalocal python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:04 np0005575490.novalocal python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:04 np0005575490.novalocal python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:05 np0005575490.novalocal python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:05 np0005575490.novalocal python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:05 np0005575490.novalocal python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:05 np0005575490.novalocal python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:06 np0005575490.novalocal python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:06 np0005575490.novalocal python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:06 np0005575490.novalocal python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:07 np0005575490.novalocal python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:07 np0005575490.novalocal python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:07 np0005575490.novalocal python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:08 np0005575490.novalocal python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:08 np0005575490.novalocal python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:33:10 np0005575490.novalocal sudo[6054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egeugpyzdegtosbasmxnivryupusclac ; /usr/bin/python3'
Jan 06 14:33:10 np0005575490.novalocal sudo[6054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:33:10 np0005575490.novalocal python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 06 14:33:10 np0005575490.novalocal systemd[1]: Starting Time & Date Service...
Jan 06 14:33:11 np0005575490.novalocal systemd[1]: Started Time & Date Service.
Jan 06 14:33:12 np0005575490.novalocal systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Jan 06 14:33:12 np0005575490.novalocal sudo[6054]: pam_unix(sudo:session): session closed for user root
Jan 06 14:33:12 np0005575490.novalocal sudo[6085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdborgpuaxxaewtrayeebyjlxiaunaaa ; /usr/bin/python3'
Jan 06 14:33:12 np0005575490.novalocal sudo[6085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:33:12 np0005575490.novalocal python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:33:12 np0005575490.novalocal sudo[6085]: pam_unix(sudo:session): session closed for user root
Jan 06 14:33:12 np0005575490.novalocal python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:33:13 np0005575490.novalocal python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1767709992.6124315-153-191718205072477/source _original_basename=tmpovnn8zzd follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:33:13 np0005575490.novalocal python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:33:14 np0005575490.novalocal python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1767709993.4813192-183-12212503828773/source _original_basename=tmpvmd_lz8k follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:33:15 np0005575490.novalocal sudo[6505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfhvpmkytbtxoqgkhnglpjflwwffcsko ; /usr/bin/python3'
Jan 06 14:33:15 np0005575490.novalocal sudo[6505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:33:15 np0005575490.novalocal python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:33:15 np0005575490.novalocal sudo[6505]: pam_unix(sudo:session): session closed for user root
Jan 06 14:33:15 np0005575490.novalocal sudo[6578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnchpqbcemmvjawyulygernhdgufjply ; /usr/bin/python3'
Jan 06 14:33:15 np0005575490.novalocal sudo[6578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:33:15 np0005575490.novalocal python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1767709994.9299839-231-139024051120284/source _original_basename=tmp6sijylx2 follow=False checksum=cd982ffd608592e8f819f22d6376b4402103f855 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:33:15 np0005575490.novalocal sudo[6578]: pam_unix(sudo:session): session closed for user root
Jan 06 14:33:16 np0005575490.novalocal python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:33:16 np0005575490.novalocal python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:33:16 np0005575490.novalocal sudo[6732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yivzghdbpnyifvlztpphapriburiucwg ; /usr/bin/python3'
Jan 06 14:33:16 np0005575490.novalocal sudo[6732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:33:17 np0005575490.novalocal python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:33:17 np0005575490.novalocal sudo[6732]: pam_unix(sudo:session): session closed for user root
Jan 06 14:33:17 np0005575490.novalocal sudo[6805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohxehijaamjgkwkujbuqlsojgwvduwum ; /usr/bin/python3'
Jan 06 14:33:17 np0005575490.novalocal sudo[6805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:33:17 np0005575490.novalocal python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1767709996.7159076-273-9317368672444/source _original_basename=tmptrb5s6ms follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:33:17 np0005575490.novalocal sudo[6805]: pam_unix(sudo:session): session closed for user root
Jan 06 14:33:17 np0005575490.novalocal sudo[6856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pypqmnybskajthqtbywvvaqbovzrvfdv ; /usr/bin/python3'
Jan 06 14:33:17 np0005575490.novalocal sudo[6856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:33:18 np0005575490.novalocal python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-c4da-ea4a-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:33:18 np0005575490.novalocal sudo[6856]: pam_unix(sudo:session): session closed for user root
Jan 06 14:33:18 np0005575490.novalocal python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-c4da-ea4a-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 06 14:33:19 np0005575490.novalocal python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:33:35 np0005575490.novalocal sudo[6938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvuqzkgdlcxvueyxhnyxscxzsewldanz ; /usr/bin/python3'
Jan 06 14:33:35 np0005575490.novalocal sudo[6938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:33:36 np0005575490.novalocal python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:33:36 np0005575490.novalocal sudo[6938]: pam_unix(sudo:session): session closed for user root
Jan 06 14:33:42 np0005575490.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 06 14:34:11 np0005575490.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 06 14:34:11 np0005575490.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 06 14:34:11 np0005575490.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 06 14:34:11 np0005575490.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 06 14:34:11 np0005575490.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 06 14:34:11 np0005575490.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 06 14:34:11 np0005575490.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 06 14:34:11 np0005575490.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 06 14:34:11 np0005575490.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 06 14:34:11 np0005575490.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 06 14:34:11 np0005575490.novalocal NetworkManager[856]: <info>  [1767710051.3035] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 06 14:34:11 np0005575490.novalocal systemd-udevd[6944]: Network interface NamePolicy= disabled on kernel command line.
Jan 06 14:34:11 np0005575490.novalocal NetworkManager[856]: <info>  [1767710051.3315] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 14:34:11 np0005575490.novalocal NetworkManager[856]: <info>  [1767710051.3360] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 06 14:34:11 np0005575490.novalocal NetworkManager[856]: <info>  [1767710051.3369] device (eth1): carrier: link connected
Jan 06 14:34:11 np0005575490.novalocal NetworkManager[856]: <info>  [1767710051.3373] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 06 14:34:11 np0005575490.novalocal NetworkManager[856]: <info>  [1767710051.3385] policy: auto-activating connection 'Wired connection 1' (0612abbc-cf75-33f1-b6b4-e54eeee8ddc7)
Jan 06 14:34:11 np0005575490.novalocal NetworkManager[856]: <info>  [1767710051.3390] device (eth1): Activation: starting connection 'Wired connection 1' (0612abbc-cf75-33f1-b6b4-e54eeee8ddc7)
Jan 06 14:34:11 np0005575490.novalocal NetworkManager[856]: <info>  [1767710051.3393] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 14:34:11 np0005575490.novalocal NetworkManager[856]: <info>  [1767710051.3401] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 14:34:11 np0005575490.novalocal NetworkManager[856]: <info>  [1767710051.3409] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 14:34:11 np0005575490.novalocal NetworkManager[856]: <info>  [1767710051.3419] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 06 14:34:11 np0005575490.novalocal python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-bb70-3823-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:34:18 np0005575490.novalocal sudo[7048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krsfefjtushsqsznzmluicrfgfzzddfe ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 06 14:34:18 np0005575490.novalocal sudo[7048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:34:18 np0005575490.novalocal python3[7050]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:34:19 np0005575490.novalocal sudo[7048]: pam_unix(sudo:session): session closed for user root
Jan 06 14:34:19 np0005575490.novalocal sudo[7121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlzxjmoeqrpqdsegrnxnysvjlqfoojlt ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 06 14:34:19 np0005575490.novalocal sudo[7121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:34:19 np0005575490.novalocal python3[7123]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767710058.6508574-102-177905979831209/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=1c8ed226c5ee80ad147a2e34404b549f06fb07c0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:34:19 np0005575490.novalocal sudo[7121]: pam_unix(sudo:session): session closed for user root
Jan 06 14:34:19 np0005575490.novalocal sudo[7171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odowlkwxugpblxeiecepzgrwmtghkxcy ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 06 14:34:19 np0005575490.novalocal sudo[7171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:34:20 np0005575490.novalocal python3[7173]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[856]: <info>  [1767710060.2498] caught SIGTERM, shutting down normally.
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: Stopping Network Manager...
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[856]: <info>  [1767710060.2507] dhcp4 (eth0): canceled DHCP transaction
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[856]: <info>  [1767710060.2508] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[856]: <info>  [1767710060.2508] dhcp4 (eth0): state changed no lease
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[856]: <info>  [1767710060.2510] manager: NetworkManager state is now CONNECTING
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[856]: <info>  [1767710060.2611] dhcp4 (eth1): canceled DHCP transaction
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[856]: <info>  [1767710060.2612] dhcp4 (eth1): state changed no lease
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[856]: <info>  [1767710060.6725] exiting (success)
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: Stopped Network Manager.
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: NetworkManager.service: Consumed 1.112s CPU time, 10.0M memory peak.
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: Starting Network Manager...
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.7495] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:9618711b-fe4f-49ab-b47b-caab4b22688f)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.7496] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.7560] manager[0x5570e235d000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: Starting Hostname Service...
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: Started Hostname Service.
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8639] hostname: hostname: using hostnamed
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8642] hostname: static hostname changed from (none) to "np0005575490.novalocal"
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8650] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8656] manager[0x5570e235d000]: rfkill: Wi-Fi hardware radio set enabled
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8656] manager[0x5570e235d000]: rfkill: WWAN hardware radio set enabled
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8699] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8700] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8701] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8702] manager: Networking is enabled by state file
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8705] settings: Loaded settings plugin: keyfile (internal)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8710] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8746] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8760] dhcp: init: Using DHCP client 'internal'
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8764] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8771] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8778] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8789] device (lo): Activation: starting connection 'lo' (0d776732-e25f-45b4-9be2-41af4991938d)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8799] device (eth0): carrier: link connected
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8805] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8812] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8813] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8821] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8830] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8838] device (eth1): carrier: link connected
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8845] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8852] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (0612abbc-cf75-33f1-b6b4-e54eeee8ddc7) (indicated)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8852] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8859] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8868] device (eth1): Activation: starting connection 'Wired connection 1' (0612abbc-cf75-33f1-b6b4-e54eeee8ddc7)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8875] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: Started Network Manager.
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8881] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8886] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8888] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8892] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8895] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8898] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8903] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8906] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8915] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8919] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8931] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8935] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8957] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8964] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 06 14:34:20 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710060.8971] device (lo): Activation: successful, device activated.
Jan 06 14:34:20 np0005575490.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 06 14:34:20 np0005575490.novalocal sudo[7171]: pam_unix(sudo:session): session closed for user root
Jan 06 14:34:21 np0005575490.novalocal python3[7238]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-bb70-3823-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:34:21 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710061.5363] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Jan 06 14:34:21 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710061.5377] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 06 14:34:21 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710061.5521] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 06 14:34:21 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710061.5552] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 06 14:34:21 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710061.5554] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 06 14:34:21 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710061.5556] manager: NetworkManager state is now CONNECTED_SITE
Jan 06 14:34:21 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710061.5558] device (eth0): Activation: successful, device activated.
Jan 06 14:34:21 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710061.5561] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 06 14:34:31 np0005575490.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 06 14:34:40 np0005575490.novalocal sshd-session[7260]: error: maximum authentication attempts exceeded for root from 113.162.9.244 port 52102 ssh2 [preauth]
Jan 06 14:34:40 np0005575490.novalocal sshd-session[7260]: Disconnecting authenticating user root 113.162.9.244 port 52102: Too many authentication failures [preauth]
Jan 06 14:34:50 np0005575490.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 06 14:34:51 np0005575490.novalocal sshd-session[7262]: error: maximum authentication attempts exceeded for root from 113.162.9.244 port 52241 ssh2 [preauth]
Jan 06 14:34:51 np0005575490.novalocal sshd-session[7262]: Disconnecting authenticating user root 113.162.9.244 port 52241: Too many authentication failures [preauth]
Jan 06 14:35:00 np0005575490.novalocal sshd-session[7266]: error: maximum authentication attempts exceeded for root from 113.162.9.244 port 52386 ssh2 [preauth]
Jan 06 14:35:00 np0005575490.novalocal sshd-session[7266]: Disconnecting authenticating user root 113.162.9.244 port 52386: Too many authentication failures [preauth]
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.2656] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 06 14:35:06 np0005575490.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 06 14:35:06 np0005575490.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3098] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3108] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3135] device (eth1): Activation: successful, device activated.
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3150] manager: startup complete
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3170] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <warn>  [1767710106.3178] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3189] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 06 14:35:06 np0005575490.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3301] dhcp4 (eth1): canceled DHCP transaction
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3302] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3303] dhcp4 (eth1): state changed no lease
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3322] policy: auto-activating connection 'ci-private-network' (ff0b2ac9-e6ee-550e-a6e2-60d885b28b26)
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3331] device (eth1): Activation: starting connection 'ci-private-network' (ff0b2ac9-e6ee-550e-a6e2-60d885b28b26)
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3333] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3336] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3345] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3357] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3428] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3430] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 14:35:06 np0005575490.novalocal NetworkManager[7190]: <info>  [1767710106.3437] device (eth1): Activation: successful, device activated.
Jan 06 14:35:06 np0005575490.novalocal systemd[4299]: Starting Mark boot as successful...
Jan 06 14:35:06 np0005575490.novalocal systemd[4299]: Finished Mark boot as successful.
Jan 06 14:35:09 np0005575490.novalocal sshd-session[7268]: Received disconnect from 113.162.9.244 port 52522:11: disconnected by user [preauth]
Jan 06 14:35:09 np0005575490.novalocal sshd-session[7268]: Disconnected from authenticating user root 113.162.9.244 port 52522 [preauth]
Jan 06 14:35:15 np0005575490.novalocal sshd-session[7294]: Invalid user admin from 113.162.9.244 port 52642
Jan 06 14:35:16 np0005575490.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 06 14:35:17 np0005575490.novalocal sshd-session[7294]: error: maximum authentication attempts exceeded for invalid user admin from 113.162.9.244 port 52642 ssh2 [preauth]
Jan 06 14:35:17 np0005575490.novalocal sshd-session[7294]: Disconnecting invalid user admin 113.162.9.244 port 52642: Too many authentication failures [preauth]
Jan 06 14:35:21 np0005575490.novalocal sshd-session[4308]: Received disconnect from 38.102.83.114 port 56046:11: disconnected by user
Jan 06 14:35:21 np0005575490.novalocal sshd-session[4308]: Disconnected from user zuul 38.102.83.114 port 56046
Jan 06 14:35:21 np0005575490.novalocal sshd-session[4295]: pam_unix(sshd:session): session closed for user zuul
Jan 06 14:35:21 np0005575490.novalocal systemd-logind[791]: Session 1 logged out. Waiting for processes to exit.
Jan 06 14:35:21 np0005575490.novalocal sshd-session[7298]: Accepted publickey for zuul from 38.102.83.114 port 40868 ssh2: RSA SHA256:/tsYtTPHPswvCHUyDjuXJcnXXQRlaCz6QYAgaouSN5U
Jan 06 14:35:21 np0005575490.novalocal systemd-logind[791]: New session 3 of user zuul.
Jan 06 14:35:21 np0005575490.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 06 14:35:21 np0005575490.novalocal sshd-session[7298]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 14:35:21 np0005575490.novalocal sudo[7377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hifwvrcacbyzjhesrncdhoswkvqkouwo ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 06 14:35:21 np0005575490.novalocal sudo[7377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:35:21 np0005575490.novalocal python3[7379]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:35:21 np0005575490.novalocal sudo[7377]: pam_unix(sudo:session): session closed for user root
Jan 06 14:35:21 np0005575490.novalocal sudo[7450]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewfjckvhswaxclpsqzpujhtczgnpjcuz ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 06 14:35:21 np0005575490.novalocal sudo[7450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:35:22 np0005575490.novalocal python3[7452]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767710121.3170414-259-72691846363840/source _original_basename=tmpppwj6oyk follow=False checksum=1a63974fd90447cb955678a19d75e4947b664eba backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:35:22 np0005575490.novalocal sudo[7450]: pam_unix(sudo:session): session closed for user root
Jan 06 14:35:24 np0005575490.novalocal sshd-session[7301]: Connection closed by 38.102.83.114 port 40868
Jan 06 14:35:24 np0005575490.novalocal sshd-session[7298]: pam_unix(sshd:session): session closed for user zuul
Jan 06 14:35:24 np0005575490.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 06 14:35:24 np0005575490.novalocal systemd-logind[791]: Session 3 logged out. Waiting for processes to exit.
Jan 06 14:35:24 np0005575490.novalocal systemd-logind[791]: Removed session 3.
Jan 06 14:35:25 np0005575490.novalocal sshd-session[7296]: Invalid user admin from 113.162.9.244 port 52769
Jan 06 14:35:26 np0005575490.novalocal sshd-session[7296]: error: maximum authentication attempts exceeded for invalid user admin from 113.162.9.244 port 52769 ssh2 [preauth]
Jan 06 14:35:26 np0005575490.novalocal sshd-session[7296]: Disconnecting invalid user admin 113.162.9.244 port 52769: Too many authentication failures [preauth]
Jan 06 14:35:35 np0005575490.novalocal sshd-session[7477]: Invalid user admin from 113.162.9.244 port 52907
Jan 06 14:35:36 np0005575490.novalocal sshd-session[7477]: Received disconnect from 113.162.9.244 port 52907:11: disconnected by user [preauth]
Jan 06 14:35:36 np0005575490.novalocal sshd-session[7477]: Disconnected from invalid user admin 113.162.9.244 port 52907 [preauth]
Jan 06 14:35:43 np0005575490.novalocal sshd-session[7479]: Invalid user oracle from 113.162.9.244 port 53041
Jan 06 14:35:44 np0005575490.novalocal sshd-session[7479]: error: maximum authentication attempts exceeded for invalid user oracle from 113.162.9.244 port 53041 ssh2 [preauth]
Jan 06 14:35:44 np0005575490.novalocal sshd-session[7479]: Disconnecting invalid user oracle 113.162.9.244 port 53041: Too many authentication failures [preauth]
Jan 06 14:35:52 np0005575490.novalocal sshd-session[7481]: Invalid user oracle from 113.162.9.244 port 53169
Jan 06 14:35:53 np0005575490.novalocal sshd-session[7481]: error: maximum authentication attempts exceeded for invalid user oracle from 113.162.9.244 port 53169 ssh2 [preauth]
Jan 06 14:35:53 np0005575490.novalocal sshd-session[7481]: Disconnecting invalid user oracle 113.162.9.244 port 53169: Too many authentication failures [preauth]
Jan 06 14:36:00 np0005575490.novalocal sshd-session[7483]: Invalid user oracle from 113.162.9.244 port 53287
Jan 06 14:36:01 np0005575490.novalocal sshd-session[7483]: Received disconnect from 113.162.9.244 port 53287:11: disconnected by user [preauth]
Jan 06 14:36:01 np0005575490.novalocal sshd-session[7483]: Disconnected from invalid user oracle 113.162.9.244 port 53287 [preauth]
Jan 06 14:36:08 np0005575490.novalocal sshd-session[7485]: Invalid user usuario from 113.162.9.244 port 53407
Jan 06 14:36:10 np0005575490.novalocal sshd-session[7485]: error: maximum authentication attempts exceeded for invalid user usuario from 113.162.9.244 port 53407 ssh2 [preauth]
Jan 06 14:36:10 np0005575490.novalocal sshd-session[7485]: Disconnecting invalid user usuario 113.162.9.244 port 53407: Too many authentication failures [preauth]
Jan 06 14:36:17 np0005575490.novalocal sshd-session[7487]: Invalid user usuario from 113.162.9.244 port 53540
Jan 06 14:36:19 np0005575490.novalocal sshd-session[7487]: error: maximum authentication attempts exceeded for invalid user usuario from 113.162.9.244 port 53540 ssh2 [preauth]
Jan 06 14:36:19 np0005575490.novalocal sshd-session[7487]: Disconnecting invalid user usuario 113.162.9.244 port 53540: Too many authentication failures [preauth]
Jan 06 14:36:25 np0005575490.novalocal sshd-session[7489]: Invalid user usuario from 113.162.9.244 port 53658
Jan 06 14:36:26 np0005575490.novalocal sshd-session[7489]: Received disconnect from 113.162.9.244 port 53658:11: disconnected by user [preauth]
Jan 06 14:36:26 np0005575490.novalocal sshd-session[7489]: Disconnected from invalid user usuario 113.162.9.244 port 53658 [preauth]
Jan 06 14:36:33 np0005575490.novalocal sshd-session[7491]: Invalid user test from 113.162.9.244 port 53770
Jan 06 14:36:35 np0005575490.novalocal sshd-session[7491]: error: maximum authentication attempts exceeded for invalid user test from 113.162.9.244 port 53770 ssh2 [preauth]
Jan 06 14:36:35 np0005575490.novalocal sshd-session[7491]: Disconnecting invalid user test 113.162.9.244 port 53770: Too many authentication failures [preauth]
Jan 06 14:36:45 np0005575490.novalocal sshd-session[7493]: Connection closed by 113.162.9.244 port 53900 [preauth]
Jan 06 14:38:06 np0005575490.novalocal systemd[4299]: Created slice User Background Tasks Slice.
Jan 06 14:38:06 np0005575490.novalocal systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Jan 06 14:38:06 np0005575490.novalocal systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Jan 06 14:41:15 np0005575490.novalocal sshd-session[7501]: Accepted publickey for zuul from 38.102.83.114 port 45338 ssh2: RSA SHA256:/tsYtTPHPswvCHUyDjuXJcnXXQRlaCz6QYAgaouSN5U
Jan 06 14:41:15 np0005575490.novalocal systemd-logind[791]: New session 4 of user zuul.
Jan 06 14:41:15 np0005575490.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 06 14:41:15 np0005575490.novalocal sshd-session[7501]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 14:41:15 np0005575490.novalocal sudo[7528]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raqkdkjzuerolcuydvxucqrlemggodoi ; /usr/bin/python3'
Jan 06 14:41:15 np0005575490.novalocal sudo[7528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:15 np0005575490.novalocal python3[7530]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-c6a7-5c1e-000000002179-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:41:15 np0005575490.novalocal sudo[7528]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:15 np0005575490.novalocal sudo[7556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycshpkpxkbvvealozxmyvpnnbxjsrsoz ; /usr/bin/python3'
Jan 06 14:41:15 np0005575490.novalocal sudo[7556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:16 np0005575490.novalocal python3[7558]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:41:16 np0005575490.novalocal sudo[7556]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:16 np0005575490.novalocal sudo[7582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scjnwkjtdfjsvuwpmkrvmrtexxusszqn ; /usr/bin/python3'
Jan 06 14:41:16 np0005575490.novalocal sudo[7582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:16 np0005575490.novalocal python3[7585]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:41:16 np0005575490.novalocal sudo[7582]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:16 np0005575490.novalocal sudo[7609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmbgumkdgrxlcirarsjysfphafpyhwrq ; /usr/bin/python3'
Jan 06 14:41:16 np0005575490.novalocal sudo[7609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:16 np0005575490.novalocal python3[7611]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:41:16 np0005575490.novalocal sudo[7609]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:16 np0005575490.novalocal sudo[7635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebcnvrnkdaelxtqasfnrrvbzjwrtvhbn ; /usr/bin/python3'
Jan 06 14:41:16 np0005575490.novalocal sudo[7635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:16 np0005575490.novalocal python3[7637]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:41:16 np0005575490.novalocal sudo[7635]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:17 np0005575490.novalocal sudo[7661]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfksvvquqwmosppmxqjjrmtrausyzgxi ; /usr/bin/python3'
Jan 06 14:41:17 np0005575490.novalocal sudo[7661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:17 np0005575490.novalocal python3[7663]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:41:17 np0005575490.novalocal sudo[7661]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:17 np0005575490.novalocal sudo[7739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dppvmzemsqwkvxsqrqhgkqwbqlbqdaqd ; /usr/bin/python3'
Jan 06 14:41:17 np0005575490.novalocal sudo[7739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:17 np0005575490.novalocal python3[7741]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:41:17 np0005575490.novalocal sudo[7739]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:18 np0005575490.novalocal sudo[7812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpuupptopfrymqvcgrboctxazacktjla ; /usr/bin/python3'
Jan 06 14:41:18 np0005575490.novalocal sudo[7812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:18 np0005575490.novalocal python3[7814]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767710477.6292672-513-47324767446415/source _original_basename=tmpso2d2zm4 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:41:18 np0005575490.novalocal sudo[7812]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:19 np0005575490.novalocal sudo[7862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfeinzoofcjsmmtmdlmtpqvswkbxabdy ; /usr/bin/python3'
Jan 06 14:41:19 np0005575490.novalocal sudo[7862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:19 np0005575490.novalocal python3[7864]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 14:41:19 np0005575490.novalocal systemd[1]: Reloading.
Jan 06 14:41:19 np0005575490.novalocal systemd-rc-local-generator[7882]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 14:41:19 np0005575490.novalocal sudo[7862]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:20 np0005575490.novalocal sudo[7918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acevktcnonnxrjqodupdvllwlvzaedjj ; /usr/bin/python3'
Jan 06 14:41:20 np0005575490.novalocal sudo[7918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:20 np0005575490.novalocal python3[7920]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 06 14:41:20 np0005575490.novalocal sudo[7918]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:21 np0005575490.novalocal sudo[7944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpmeqerhvscyclerijmexfxnsymngdgs ; /usr/bin/python3'
Jan 06 14:41:21 np0005575490.novalocal sudo[7944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:21 np0005575490.novalocal python3[7946]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:41:21 np0005575490.novalocal sudo[7944]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:21 np0005575490.novalocal sudo[7972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moaysiqrpmbjartgylxixscavnmieyym ; /usr/bin/python3'
Jan 06 14:41:21 np0005575490.novalocal sudo[7972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:21 np0005575490.novalocal python3[7974]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:41:21 np0005575490.novalocal sudo[7972]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:21 np0005575490.novalocal sudo[8001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmkvodaakzcttnkbitrajmvtagnqwpwk ; /usr/bin/python3'
Jan 06 14:41:21 np0005575490.novalocal sudo[8001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:22 np0005575490.novalocal python3[8003]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:41:22 np0005575490.novalocal sudo[8001]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:22 np0005575490.novalocal sudo[8029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-artsaplyfumfkawqkfnjlwuqvsvkqaic ; /usr/bin/python3'
Jan 06 14:41:22 np0005575490.novalocal sudo[8029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:22 np0005575490.novalocal python3[8031]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:41:22 np0005575490.novalocal sudo[8029]: pam_unix(sudo:session): session closed for user root
Jan 06 14:41:22 np0005575490.novalocal python3[8058]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-c6a7-5c1e-000000002180-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:41:23 np0005575490.novalocal python3[8088]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 06 14:41:25 np0005575490.novalocal sshd-session[7504]: Connection closed by 38.102.83.114 port 45338
Jan 06 14:41:25 np0005575490.novalocal sshd-session[7501]: pam_unix(sshd:session): session closed for user zuul
Jan 06 14:41:25 np0005575490.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 06 14:41:25 np0005575490.novalocal systemd[1]: session-4.scope: Consumed 4.604s CPU time.
Jan 06 14:41:25 np0005575490.novalocal systemd-logind[791]: Session 4 logged out. Waiting for processes to exit.
Jan 06 14:41:25 np0005575490.novalocal systemd-logind[791]: Removed session 4.
Jan 06 14:41:26 np0005575490.novalocal sshd-session[8092]: Accepted publickey for zuul from 38.102.83.114 port 59410 ssh2: RSA SHA256:/tsYtTPHPswvCHUyDjuXJcnXXQRlaCz6QYAgaouSN5U
Jan 06 14:41:26 np0005575490.novalocal systemd-logind[791]: New session 5 of user zuul.
Jan 06 14:41:26 np0005575490.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 06 14:41:26 np0005575490.novalocal sshd-session[8092]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 14:41:26 np0005575490.novalocal sudo[8119]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmwxuparyjeizsmynzgbqwjdvuyghmcw ; /usr/bin/python3'
Jan 06 14:41:26 np0005575490.novalocal sudo[8119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:41:27 np0005575490.novalocal python3[8121]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 06 14:41:33 np0005575490.novalocal setsebool[8159]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 06 14:41:33 np0005575490.novalocal setsebool[8159]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 06 14:41:44 np0005575490.novalocal kernel: SELinux:  Converting 383 SID table entries...
Jan 06 14:41:44 np0005575490.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 06 14:41:44 np0005575490.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 06 14:41:44 np0005575490.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 06 14:41:44 np0005575490.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 06 14:41:44 np0005575490.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 06 14:41:44 np0005575490.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 06 14:41:44 np0005575490.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 06 14:41:56 np0005575490.novalocal kernel: SELinux:  Converting 386 SID table entries...
Jan 06 14:41:56 np0005575490.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 06 14:41:56 np0005575490.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 06 14:41:56 np0005575490.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 06 14:41:56 np0005575490.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 06 14:41:56 np0005575490.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 06 14:41:56 np0005575490.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 06 14:41:56 np0005575490.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 06 14:42:13 np0005575490.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 06 14:42:14 np0005575490.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 06 14:42:14 np0005575490.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 06 14:42:14 np0005575490.novalocal systemd[1]: Reloading.
Jan 06 14:42:14 np0005575490.novalocal systemd-rc-local-generator[8921]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 14:42:14 np0005575490.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 06 14:42:15 np0005575490.novalocal sudo[8119]: pam_unix(sudo:session): session closed for user root
Jan 06 14:42:16 np0005575490.novalocal python3[10100]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-b0c2-c048-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:42:17 np0005575490.novalocal kernel: evm: overlay not supported
Jan 06 14:42:17 np0005575490.novalocal systemd[4299]: Starting D-Bus User Message Bus...
Jan 06 14:42:17 np0005575490.novalocal dbus-broker-launch[11020]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 06 14:42:17 np0005575490.novalocal dbus-broker-launch[11020]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 06 14:42:17 np0005575490.novalocal systemd[4299]: Started D-Bus User Message Bus.
Jan 06 14:42:17 np0005575490.novalocal dbus-broker-lau[11020]: Ready
Jan 06 14:42:17 np0005575490.novalocal systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 06 14:42:17 np0005575490.novalocal systemd[4299]: Created slice Slice /user.
Jan 06 14:42:17 np0005575490.novalocal systemd[4299]: podman-10847.scope: unit configures an IP firewall, but not running as root.
Jan 06 14:42:17 np0005575490.novalocal systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Jan 06 14:42:17 np0005575490.novalocal systemd[4299]: Started podman-10847.scope.
Jan 06 14:42:17 np0005575490.novalocal systemd[4299]: Started podman-pause-4d4136a3.scope.
Jan 06 14:42:17 np0005575490.novalocal sudo[11643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mleunbemigiislftbptvaktuhiyblcdz ; /usr/bin/python3'
Jan 06 14:42:17 np0005575490.novalocal sudo[11643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:42:18 np0005575490.novalocal python3[11669]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.224:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.224:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:42:18 np0005575490.novalocal python3[11669]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 06 14:42:18 np0005575490.novalocal sudo[11643]: pam_unix(sudo:session): session closed for user root
Jan 06 14:42:18 np0005575490.novalocal sshd-session[8095]: Connection closed by 38.102.83.114 port 59410
Jan 06 14:42:18 np0005575490.novalocal sshd-session[8092]: pam_unix(sshd:session): session closed for user zuul
Jan 06 14:42:18 np0005575490.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 06 14:42:18 np0005575490.novalocal systemd[1]: session-5.scope: Consumed 45.462s CPU time.
Jan 06 14:42:18 np0005575490.novalocal systemd-logind[791]: Session 5 logged out. Waiting for processes to exit.
Jan 06 14:42:18 np0005575490.novalocal systemd-logind[791]: Removed session 5.
Jan 06 14:42:35 np0005575490.novalocal irqbalance[779]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 06 14:42:35 np0005575490.novalocal irqbalance[779]: IRQ 27 affinity is now unmanaged
Jan 06 14:42:37 np0005575490.novalocal sshd-session[18344]: Unable to negotiate with 38.102.83.46 port 40980: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 06 14:42:37 np0005575490.novalocal sshd-session[18343]: Connection closed by 38.102.83.46 port 40966 [preauth]
Jan 06 14:42:37 np0005575490.novalocal sshd-session[18342]: Connection closed by 38.102.83.46 port 40970 [preauth]
Jan 06 14:42:37 np0005575490.novalocal sshd-session[18341]: Unable to negotiate with 38.102.83.46 port 40972: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 06 14:42:37 np0005575490.novalocal sshd-session[18339]: Unable to negotiate with 38.102.83.46 port 40992: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 06 14:42:41 np0005575490.novalocal sshd-session[19114]: Accepted publickey for zuul from 38.102.83.114 port 53552 ssh2: RSA SHA256:/tsYtTPHPswvCHUyDjuXJcnXXQRlaCz6QYAgaouSN5U
Jan 06 14:42:41 np0005575490.novalocal systemd-logind[791]: New session 6 of user zuul.
Jan 06 14:42:41 np0005575490.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 06 14:42:41 np0005575490.novalocal sshd-session[19114]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 14:42:42 np0005575490.novalocal python3[19205]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEwR44EPYk/wS45p/Ud3tZeDg8ZzF5UcD+pCPhgU5dLL714TQ5v2KzMW/ta/+XrlhCWH/XhHNkrwsHaYDAzp7NQ= zuul@np0005575489.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:42:42 np0005575490.novalocal sudo[19361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aribzeehrzikdvyzqdgtdfnqtswdjwqt ; /usr/bin/python3'
Jan 06 14:42:42 np0005575490.novalocal sudo[19361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:42:42 np0005575490.novalocal python3[19371]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEwR44EPYk/wS45p/Ud3tZeDg8ZzF5UcD+pCPhgU5dLL714TQ5v2KzMW/ta/+XrlhCWH/XhHNkrwsHaYDAzp7NQ= zuul@np0005575489.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:42:42 np0005575490.novalocal sudo[19361]: pam_unix(sudo:session): session closed for user root
Jan 06 14:42:43 np0005575490.novalocal sudo[19610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpknetimvuctswtqqhcajnkhikpjibom ; /usr/bin/python3'
Jan 06 14:42:43 np0005575490.novalocal sudo[19610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:42:43 np0005575490.novalocal python3[19620]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005575490.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 06 14:42:43 np0005575490.novalocal useradd[19673]: new group: name=cloud-admin, GID=1002
Jan 06 14:42:43 np0005575490.novalocal useradd[19673]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 06 14:42:43 np0005575490.novalocal sudo[19610]: pam_unix(sudo:session): session closed for user root
Jan 06 14:42:43 np0005575490.novalocal sudo[19777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opaatwoxsqfdwtgfprawqktqikkdyaor ; /usr/bin/python3'
Jan 06 14:42:43 np0005575490.novalocal sudo[19777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:42:43 np0005575490.novalocal python3[19785]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEwR44EPYk/wS45p/Ud3tZeDg8ZzF5UcD+pCPhgU5dLL714TQ5v2KzMW/ta/+XrlhCWH/XhHNkrwsHaYDAzp7NQ= zuul@np0005575489.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 06 14:42:43 np0005575490.novalocal sudo[19777]: pam_unix(sudo:session): session closed for user root
Jan 06 14:42:44 np0005575490.novalocal sudo[19987]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yspmneuxhygxnozbgtgjhzugbnaddgir ; /usr/bin/python3'
Jan 06 14:42:44 np0005575490.novalocal sudo[19987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:42:44 np0005575490.novalocal python3[19993]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:42:44 np0005575490.novalocal sudo[19987]: pam_unix(sudo:session): session closed for user root
Jan 06 14:42:44 np0005575490.novalocal sudo[20200]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhfdyjskaawfepkkhievtmssujijlafg ; /usr/bin/python3'
Jan 06 14:42:44 np0005575490.novalocal sudo[20200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:42:44 np0005575490.novalocal python3[20210]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767710564.042192-135-8940498329937/source _original_basename=tmpfuktwro7 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:42:44 np0005575490.novalocal sudo[20200]: pam_unix(sudo:session): session closed for user root
Jan 06 14:42:45 np0005575490.novalocal sudo[20466]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zexltoaisepfwdtdlpstftzpjlkiprpm ; /usr/bin/python3'
Jan 06 14:42:45 np0005575490.novalocal sudo[20466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:42:45 np0005575490.novalocal python3[20476]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 06 14:42:45 np0005575490.novalocal systemd[1]: Starting Hostname Service...
Jan 06 14:42:45 np0005575490.novalocal systemd[1]: Started Hostname Service.
Jan 06 14:42:45 np0005575490.novalocal systemd-hostnamed[20557]: Changed pretty hostname to 'compute-0'
Jan 06 14:42:45 compute-0 systemd-hostnamed[20557]: Hostname set to <compute-0> (static)
Jan 06 14:42:45 compute-0 NetworkManager[7190]: <info>  [1767710565.9130] hostname: static hostname changed from "np0005575490.novalocal" to "compute-0"
Jan 06 14:42:45 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 06 14:42:45 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 06 14:42:45 compute-0 sudo[20466]: pam_unix(sudo:session): session closed for user root
Jan 06 14:42:46 compute-0 sshd-session[19154]: Connection closed by 38.102.83.114 port 53552
Jan 06 14:42:46 compute-0 sshd-session[19114]: pam_unix(sshd:session): session closed for user zuul
Jan 06 14:42:46 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 06 14:42:46 compute-0 systemd[1]: session-6.scope: Consumed 2.686s CPU time.
Jan 06 14:42:46 compute-0 systemd-logind[791]: Session 6 logged out. Waiting for processes to exit.
Jan 06 14:42:46 compute-0 systemd-logind[791]: Removed session 6.
Jan 06 14:42:55 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 06 14:43:15 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 06 14:43:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 06 14:43:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 06 14:43:23 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 19.884s CPU time.
Jan 06 14:43:23 compute-0 systemd[1]: run-re4ae2dd76c8849e29aa4fc80d47c44d6.service: Deactivated successfully.
Jan 06 14:47:25 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 06 14:47:25 compute-0 sshd-session[29993]: Accepted publickey for zuul from 38.102.83.46 port 60042 ssh2: RSA SHA256:/tsYtTPHPswvCHUyDjuXJcnXXQRlaCz6QYAgaouSN5U
Jan 06 14:47:25 compute-0 systemd-logind[791]: New session 7 of user zuul.
Jan 06 14:47:25 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 06 14:47:25 compute-0 sshd-session[29993]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 14:47:25 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 06 14:47:25 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 06 14:47:25 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 06 14:47:26 compute-0 python3[30072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 14:47:27 compute-0 sudo[30186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhojzeshbuvrbxsitqohyickjhztupbo ; /usr/bin/python3'
Jan 06 14:47:27 compute-0 sudo[30186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:28 compute-0 python3[30188]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:47:28 compute-0 sudo[30186]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:28 compute-0 sudo[30259]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udxkeaqzckqlcxldgulsjaabqskaundk ; /usr/bin/python3'
Jan 06 14:47:28 compute-0 sudo[30259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:28 compute-0 python3[30261]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767710847.663425-33572-97139042012059/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:47:28 compute-0 sudo[30259]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:28 compute-0 sudo[30285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwqzhxuwswrjxjlguzywbowlnyycrsrt ; /usr/bin/python3'
Jan 06 14:47:28 compute-0 sudo[30285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:28 compute-0 python3[30287]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:47:28 compute-0 sudo[30285]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:29 compute-0 sudo[30358]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysglwxyycnhrudzybewctgrkjljyjnvd ; /usr/bin/python3'
Jan 06 14:47:29 compute-0 sudo[30358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:29 compute-0 python3[30360]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767710847.663425-33572-97139042012059/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:47:29 compute-0 sudo[30358]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:29 compute-0 sudo[30384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhyttmmistjkyuyfstzpoidrqgzwyjvi ; /usr/bin/python3'
Jan 06 14:47:29 compute-0 sudo[30384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:29 compute-0 python3[30386]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:47:29 compute-0 sudo[30384]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:29 compute-0 sudo[30457]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsdunhdkcpivcwlwdrowceguwjhizzyu ; /usr/bin/python3'
Jan 06 14:47:29 compute-0 sudo[30457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:30 compute-0 python3[30459]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767710847.663425-33572-97139042012059/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:47:30 compute-0 sudo[30457]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:30 compute-0 sudo[30483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myhtxlaytzffpzqgszvpywibluhbzfgc ; /usr/bin/python3'
Jan 06 14:47:30 compute-0 sudo[30483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:30 compute-0 python3[30485]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:47:30 compute-0 sudo[30483]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:30 compute-0 sudo[30556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-undlfchhkswjuqjfpyzezxvanyfbpmod ; /usr/bin/python3'
Jan 06 14:47:30 compute-0 sudo[30556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:30 compute-0 python3[30558]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767710847.663425-33572-97139042012059/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:47:30 compute-0 sudo[30556]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:30 compute-0 sudo[30582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dutasxhycztdwohckfewsuaqqhnvgjqa ; /usr/bin/python3'
Jan 06 14:47:30 compute-0 sudo[30582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:31 compute-0 python3[30584]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:47:31 compute-0 sudo[30582]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:31 compute-0 sudo[30655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrmrgtsvikpeomqgdmchyaffkbpzqtjo ; /usr/bin/python3'
Jan 06 14:47:31 compute-0 sudo[30655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:31 compute-0 python3[30657]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767710847.663425-33572-97139042012059/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:47:31 compute-0 sudo[30655]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:31 compute-0 sudo[30681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikfbtaxixubwvlzwbosaupzvuhjkesko ; /usr/bin/python3'
Jan 06 14:47:31 compute-0 sudo[30681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:31 compute-0 python3[30683]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:47:31 compute-0 sudo[30681]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:32 compute-0 sudo[30754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwcbkgeopsqovyvgcsbevberycolkaiw ; /usr/bin/python3'
Jan 06 14:47:32 compute-0 sudo[30754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:32 compute-0 python3[30756]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767710847.663425-33572-97139042012059/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:47:32 compute-0 sudo[30754]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:32 compute-0 sudo[30780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvidadvzylxaxerzcmbnyfngoowqvyac ; /usr/bin/python3'
Jan 06 14:47:32 compute-0 sudo[30780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:32 compute-0 python3[30782]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 06 14:47:32 compute-0 sudo[30780]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:32 compute-0 sudo[30853]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgzvgslpwdtcwjhdovxrnbflzvtqijsd ; /usr/bin/python3'
Jan 06 14:47:32 compute-0 sudo[30853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 14:47:33 compute-0 python3[30855]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767710847.663425-33572-97139042012059/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 14:47:33 compute-0 sudo[30853]: pam_unix(sudo:session): session closed for user root
Jan 06 14:47:35 compute-0 sshd-session[30880]: Connection closed by 192.168.122.11 port 42752 [preauth]
Jan 06 14:47:35 compute-0 sshd-session[30881]: Connection closed by 192.168.122.11 port 42768 [preauth]
Jan 06 14:47:35 compute-0 sshd-session[30884]: Unable to negotiate with 192.168.122.11 port 42794: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 06 14:47:35 compute-0 sshd-session[30882]: Unable to negotiate with 192.168.122.11 port 42800: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 06 14:47:35 compute-0 sshd-session[30885]: Unable to negotiate with 192.168.122.11 port 42780: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 06 14:50:04 compute-0 sshd-session[30892]: Connection closed by 54.164.170.242 port 51436 [preauth]
Jan 06 14:50:50 compute-0 python3[30917]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 14:55:50 compute-0 sshd-session[29998]: Received disconnect from 38.102.83.46 port 60042:11: disconnected by user
Jan 06 14:55:50 compute-0 sshd-session[29998]: Disconnected from user zuul 38.102.83.46 port 60042
Jan 06 14:55:50 compute-0 sshd-session[29993]: pam_unix(sshd:session): session closed for user zuul
Jan 06 14:55:50 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 06 14:55:50 compute-0 systemd[1]: session-7.scope: Consumed 6.092s CPU time.
Jan 06 14:55:50 compute-0 systemd-logind[791]: Session 7 logged out. Waiting for processes to exit.
Jan 06 14:55:50 compute-0 systemd-logind[791]: Removed session 7.
Jan 06 15:00:19 compute-0 sshd-session[30927]: Invalid user user from 78.128.112.74 port 35394
Jan 06 15:00:19 compute-0 sshd-session[30927]: Connection closed by invalid user user 78.128.112.74 port 35394 [preauth]
Jan 06 15:01:01 compute-0 CROND[30931]: (root) CMD (run-parts /etc/cron.hourly)
Jan 06 15:01:01 compute-0 run-parts[30934]: (/etc/cron.hourly) starting 0anacron
Jan 06 15:01:01 compute-0 anacron[30942]: Anacron started on 2026-01-06
Jan 06 15:01:01 compute-0 anacron[30942]: Will run job `cron.daily' in 49 min.
Jan 06 15:01:01 compute-0 anacron[30942]: Will run job `cron.weekly' in 69 min.
Jan 06 15:01:01 compute-0 anacron[30942]: Will run job `cron.monthly' in 89 min.
Jan 06 15:01:01 compute-0 anacron[30942]: Jobs will be executed sequentially
Jan 06 15:01:01 compute-0 run-parts[30944]: (/etc/cron.hourly) finished 0anacron
Jan 06 15:01:01 compute-0 CROND[30930]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 06 15:05:52 compute-0 sshd-session[30947]: Accepted publickey for zuul from 192.168.122.30 port 56834 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:05:52 compute-0 systemd-logind[791]: New session 8 of user zuul.
Jan 06 15:05:52 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 06 15:05:52 compute-0 sshd-session[30947]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:05:53 compute-0 python3.9[31100]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:05:54 compute-0 sudo[31279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nabekttitodmayqlgsludfyfomprutsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767711953.9790268-27-5006678910150/AnsiballZ_command.py'
Jan 06 15:05:54 compute-0 sudo[31279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:05:54 compute-0 python3.9[31281]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:06:02 compute-0 sudo[31279]: pam_unix(sudo:session): session closed for user root
Jan 06 15:06:03 compute-0 sshd-session[30950]: Connection closed by 192.168.122.30 port 56834
Jan 06 15:06:03 compute-0 sshd-session[30947]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:06:03 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 06 15:06:03 compute-0 systemd[1]: session-8.scope: Consumed 8.858s CPU time.
Jan 06 15:06:03 compute-0 systemd-logind[791]: Session 8 logged out. Waiting for processes to exit.
Jan 06 15:06:03 compute-0 systemd-logind[791]: Removed session 8.
Jan 06 15:06:05 compute-0 irqbalance[779]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 06 15:06:05 compute-0 irqbalance[779]: IRQ 26 affinity is now unmanaged
Jan 06 15:06:09 compute-0 sshd-session[31338]: Accepted publickey for zuul from 192.168.122.30 port 43336 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:06:09 compute-0 systemd-logind[791]: New session 9 of user zuul.
Jan 06 15:06:09 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 06 15:06:09 compute-0 sshd-session[31338]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:06:10 compute-0 python3.9[31491]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:06:10 compute-0 sshd-session[31341]: Connection closed by 192.168.122.30 port 43336
Jan 06 15:06:10 compute-0 sshd-session[31338]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:06:10 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 06 15:06:10 compute-0 systemd-logind[791]: Session 9 logged out. Waiting for processes to exit.
Jan 06 15:06:10 compute-0 systemd-logind[791]: Removed session 9.
Jan 06 15:06:27 compute-0 sshd-session[31520]: Accepted publickey for zuul from 192.168.122.30 port 53254 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:06:27 compute-0 systemd-logind[791]: New session 10 of user zuul.
Jan 06 15:06:27 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 06 15:06:27 compute-0 sshd-session[31520]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:06:28 compute-0 python3.9[31673]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 06 15:06:29 compute-0 python3.9[31847]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:06:30 compute-0 sudo[31997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewrtewzjadyxcgvaiwfvzksqxtumuknf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767711989.9553726-40-26189871756536/AnsiballZ_command.py'
Jan 06 15:06:30 compute-0 sudo[31997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:06:30 compute-0 python3.9[31999]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:06:30 compute-0 sudo[31997]: pam_unix(sudo:session): session closed for user root
Jan 06 15:06:31 compute-0 sudo[32150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewmkwanmauiyjvynwmsvtydkngqrdiqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767711991.0308306-52-262582860968265/AnsiballZ_stat.py'
Jan 06 15:06:31 compute-0 sudo[32150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:06:31 compute-0 python3.9[32152]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:06:31 compute-0 sudo[32150]: pam_unix(sudo:session): session closed for user root
Jan 06 15:06:32 compute-0 sudo[32302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuexjyaehxlnebbblxvvyhovdrrodtpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767711991.9361289-60-158211563452157/AnsiballZ_file.py'
Jan 06 15:06:32 compute-0 sudo[32302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:06:32 compute-0 python3.9[32304]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:06:32 compute-0 sudo[32302]: pam_unix(sudo:session): session closed for user root
Jan 06 15:06:33 compute-0 sudo[32454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkqzsvwdosvianuhnqxnxagzersskvis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767711993.0887794-68-162662408645345/AnsiballZ_stat.py'
Jan 06 15:06:33 compute-0 sudo[32454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:06:33 compute-0 python3.9[32456]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:06:33 compute-0 sudo[32454]: pam_unix(sudo:session): session closed for user root
Jan 06 15:06:34 compute-0 sudo[32577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmpmsekmdciauiixloeqfsvqarsquveo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767711993.0887794-68-162662408645345/AnsiballZ_copy.py'
Jan 06 15:06:34 compute-0 sudo[32577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:06:34 compute-0 python3.9[32579]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1767711993.0887794-68-162662408645345/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:06:34 compute-0 sudo[32577]: pam_unix(sudo:session): session closed for user root
Jan 06 15:06:34 compute-0 sudo[32729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjhohbvagoaarozeoslwddwbjrotuhja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767711994.6482246-83-6914784407441/AnsiballZ_setup.py'
Jan 06 15:06:34 compute-0 sudo[32729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:06:35 compute-0 python3.9[32731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:06:35 compute-0 sudo[32729]: pam_unix(sudo:session): session closed for user root
Jan 06 15:06:35 compute-0 sudo[32885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdcjcqkcqxgdssftryfzyguuxkmnwywl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767711995.608051-91-191814572901023/AnsiballZ_file.py'
Jan 06 15:06:35 compute-0 sudo[32885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:06:36 compute-0 python3.9[32887]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:06:36 compute-0 sudo[32885]: pam_unix(sudo:session): session closed for user root
Jan 06 15:06:36 compute-0 sudo[33037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raitruqkpfhfznfeyglpqfdduiebuang ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767711996.412579-100-236636897149378/AnsiballZ_file.py'
Jan 06 15:06:36 compute-0 sudo[33037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:06:36 compute-0 python3.9[33039]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:06:37 compute-0 sudo[33037]: pam_unix(sudo:session): session closed for user root
Jan 06 15:06:37 compute-0 python3.9[33189]: ansible-ansible.builtin.service_facts Invoked
Jan 06 15:06:43 compute-0 python3.9[33442]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:06:44 compute-0 python3.9[33592]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:06:45 compute-0 python3.9[33746]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:06:46 compute-0 sudo[33902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kugfcisbvzzmcvosfplfzayrzbshjhjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712005.935904-148-102725283437267/AnsiballZ_setup.py'
Jan 06 15:06:46 compute-0 sudo[33902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:06:46 compute-0 python3.9[33904]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:06:46 compute-0 sudo[33902]: pam_unix(sudo:session): session closed for user root
Jan 06 15:06:47 compute-0 sudo[33986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klaxiozpkeiqjptpypnghrafrlwbslbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712005.935904-148-102725283437267/AnsiballZ_dnf.py'
Jan 06 15:06:47 compute-0 sudo[33986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:06:47 compute-0 python3.9[33988]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:08:08 compute-0 systemd[1]: Reloading.
Jan 06 15:08:08 compute-0 systemd-rc-local-generator[34186]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:08:08 compute-0 systemd[1]: Starting dnf makecache...
Jan 06 15:08:08 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 06 15:08:08 compute-0 dnf[34196]: Failed determining last makecache time.
Jan 06 15:08:08 compute-0 dnf[34196]: delorean-openstack-barbican-42b4c41831408a8e323 120 kB/s | 3.0 kB     00:00
Jan 06 15:08:08 compute-0 dnf[34196]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 177 kB/s | 3.0 kB     00:00
Jan 06 15:08:08 compute-0 dnf[34196]: delorean-openstack-cinder-1c00d6490d88e436f26ef 182 kB/s | 3.0 kB     00:00
Jan 06 15:08:08 compute-0 dnf[34196]: delorean-python-stevedore-c4acc5639fd2329372142 189 kB/s | 3.0 kB     00:00
Jan 06 15:08:08 compute-0 dnf[34196]: delorean-python-cloudkitty-tests-tempest-2c80f8 192 kB/s | 3.0 kB     00:00
Jan 06 15:08:08 compute-0 dnf[34196]: delorean-os-refresh-config-9bfc52b5049be2d8de61 199 kB/s | 3.0 kB     00:00
Jan 06 15:08:08 compute-0 dnf[34196]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 201 kB/s | 3.0 kB     00:00
Jan 06 15:08:08 compute-0 dnf[34196]: delorean-python-designate-tests-tempest-347fdbc 187 kB/s | 3.0 kB     00:00
Jan 06 15:08:08 compute-0 dnf[34196]: delorean-openstack-glance-1fd12c29b339f30fe823e 210 kB/s | 3.0 kB     00:00
Jan 06 15:08:08 compute-0 dnf[34196]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 181 kB/s | 3.0 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: delorean-openstack-manila-3c01b7181572c95dac462 211 kB/s | 3.0 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: delorean-python-whitebox-neutron-tests-tempest- 179 kB/s | 3.0 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: delorean-openstack-octavia-ba397f07a7331190208c 181 kB/s | 3.0 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: delorean-openstack-watcher-c014f81a8647287f6dcc 174 kB/s | 3.0 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: delorean-ansible-config_template-5ccaa22121a7ff 204 kB/s | 3.0 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 196 kB/s | 3.0 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: delorean-openstack-swift-dc98a8463506ac520c469a 188 kB/s | 3.0 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: delorean-python-tempestconf-8515371b7cceebd4282 192 kB/s | 3.0 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: delorean-openstack-heat-ui-013accbfd179753bc3f0 183 kB/s | 3.0 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: CentOS Stream 9 - BaseOS                         43 kB/s | 5.1 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: CentOS Stream 9 - AppStream                      22 kB/s | 5.2 kB     00:00
Jan 06 15:08:09 compute-0 dnf[34196]: CentOS Stream 9 - CRB                            21 kB/s | 5.0 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: CentOS Stream 9 - Extras packages                29 kB/s | 7.3 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: dlrn-antelope-testing                           118 kB/s | 3.0 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: dlrn-antelope-build-deps                        143 kB/s | 3.0 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: centos9-rabbitmq                                 99 kB/s | 3.0 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: centos9-storage                                 108 kB/s | 3.0 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: centos9-opstools                                100 kB/s | 3.0 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: NFV SIG OpenvSwitch                             109 kB/s | 3.0 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: repo-setup-centos-appstream                     186 kB/s | 4.4 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: repo-setup-centos-baseos                        159 kB/s | 3.9 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: repo-setup-centos-highavailability              138 kB/s | 3.9 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: repo-setup-centos-powertools                    190 kB/s | 4.3 kB     00:00
Jan 06 15:08:10 compute-0 dnf[34196]: Extra Packages for Enterprise Linux 9 - x86_64  267 kB/s |  33 kB     00:00
Jan 06 15:08:11 compute-0 dnf[34196]: Metadata cache created.
Jan 06 15:08:11 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 06 15:08:11 compute-0 systemd[1]: Finished dnf makecache.
Jan 06 15:08:11 compute-0 systemd[1]: dnf-makecache.service: Consumed 2.026s CPU time.
Jan 06 15:08:13 compute-0 systemd[1]: Reloading.
Jan 06 15:08:13 compute-0 systemd-rc-local-generator[34270]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:08:13 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 06 15:08:13 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 06 15:08:13 compute-0 systemd[1]: Reloading.
Jan 06 15:08:13 compute-0 systemd-rc-local-generator[34313]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:08:13 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 06 15:08:14 compute-0 dbus-broker-launch[741]: Noticed file-system modification, trigger reload.
Jan 06 15:08:14 compute-0 dbus-broker-launch[741]: Noticed file-system modification, trigger reload.
Jan 06 15:08:14 compute-0 dbus-broker-launch[741]: Noticed file-system modification, trigger reload.
Jan 06 15:09:31 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Jan 06 15:09:31 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 06 15:09:31 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 06 15:09:31 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 06 15:09:31 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 06 15:09:31 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 06 15:09:31 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 06 15:09:31 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 06 15:09:32 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 06 15:09:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 06 15:09:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 06 15:09:32 compute-0 systemd[1]: Reloading.
Jan 06 15:09:32 compute-0 systemd-rc-local-generator[34643]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:09:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 06 15:09:33 compute-0 sudo[33986]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 06 15:09:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 06 15:09:33 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.234s CPU time.
Jan 06 15:09:33 compute-0 systemd[1]: run-r3b617980934c4640a3918960740a3b51.service: Deactivated successfully.
Jan 06 15:09:34 compute-0 sudo[35554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaathbcmaslpqovsyoxxfpljfkatlome ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712173.8179262-160-208931121928771/AnsiballZ_command.py'
Jan 06 15:09:34 compute-0 sudo[35554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:34 compute-0 python3.9[35556]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:09:35 compute-0 sudo[35554]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:36 compute-0 sudo[35835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lptfvtqpytlvfsrufdjexnbdpneaclxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712175.4228284-168-38624317006494/AnsiballZ_selinux.py'
Jan 06 15:09:36 compute-0 sudo[35835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:36 compute-0 python3.9[35837]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 06 15:09:36 compute-0 sudo[35835]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:37 compute-0 sudo[35987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaxmpihcaoobodfzuazdrjcxnjettnmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712176.834688-179-124590590697690/AnsiballZ_command.py'
Jan 06 15:09:37 compute-0 sudo[35987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:37 compute-0 python3.9[35989]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 06 15:09:40 compute-0 sudo[35987]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:43 compute-0 sudo[36140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrcgwduklfsnvesqwxskyvdilnmvfck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712180.649715-187-158552186497592/AnsiballZ_file.py'
Jan 06 15:09:43 compute-0 sudo[36140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:44 compute-0 python3.9[36142]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:09:44 compute-0 sudo[36140]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:44 compute-0 sudo[36293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbxaxnhmbogzctjbhnkaenbojthcfura ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712184.3179471-195-33785233979887/AnsiballZ_mount.py'
Jan 06 15:09:44 compute-0 sudo[36293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:46 compute-0 python3.9[36295]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 06 15:09:47 compute-0 sudo[36293]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:48 compute-0 sudo[36445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alkwbcsbviyimebwzuibkvalegmyovlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712188.6770012-223-143118148639859/AnsiballZ_file.py'
Jan 06 15:09:49 compute-0 sudo[36445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:52 compute-0 python3.9[36447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:09:52 compute-0 sudo[36445]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:53 compute-0 sudo[36597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mynkgqtapxyuspognswvsfdsgkkxclly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712192.7193851-231-268240640779992/AnsiballZ_stat.py'
Jan 06 15:09:53 compute-0 sudo[36597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:53 compute-0 python3.9[36599]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:09:53 compute-0 sudo[36597]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:54 compute-0 sudo[36720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sffurunwmksnyofpjgiuexujqjymwcwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712192.7193851-231-268240640779992/AnsiballZ_copy.py'
Jan 06 15:09:54 compute-0 sudo[36720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:54 compute-0 python3.9[36722]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712192.7193851-231-268240640779992/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b3b451b437c2b09b799acb3e061225350970588a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:09:54 compute-0 sudo[36720]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:55 compute-0 sudo[36872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ythjtkzndcknpnstmcxqakaefmhadsof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712194.767358-255-203910881551597/AnsiballZ_stat.py'
Jan 06 15:09:55 compute-0 sudo[36872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:55 compute-0 python3.9[36874]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:09:55 compute-0 sudo[36872]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:55 compute-0 sudo[37024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flykicyeaaszareqqikkirpkntfrnwsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712195.5010548-263-123183138247165/AnsiballZ_command.py'
Jan 06 15:09:55 compute-0 sudo[37024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:56 compute-0 python3.9[37026]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:09:56 compute-0 sudo[37024]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:56 compute-0 sudo[37177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebgktborqikxznaswmyhxhzsqprzwqxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712196.2785406-271-86636646672669/AnsiballZ_file.py'
Jan 06 15:09:56 compute-0 sudo[37177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:56 compute-0 python3.9[37179]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:09:56 compute-0 sudo[37177]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:58 compute-0 sudo[37329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhqffelnwrqbilvgycopixxuicrlcjmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712197.512679-282-21181332233906/AnsiballZ_getent.py'
Jan 06 15:09:58 compute-0 sudo[37329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:58 compute-0 python3.9[37331]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 06 15:09:58 compute-0 sudo[37329]: pam_unix(sudo:session): session closed for user root
Jan 06 15:09:58 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 06 15:09:59 compute-0 sudo[37483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqakzjwybrmpbiusnkfzprgljecdwdlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712198.9240148-290-100967704212831/AnsiballZ_group.py'
Jan 06 15:09:59 compute-0 sudo[37483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:09:59 compute-0 python3.9[37485]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 06 15:09:59 compute-0 groupadd[37486]: group added to /etc/group: name=qemu, GID=107
Jan 06 15:09:59 compute-0 groupadd[37486]: group added to /etc/gshadow: name=qemu
Jan 06 15:09:59 compute-0 groupadd[37486]: new group: name=qemu, GID=107
Jan 06 15:09:59 compute-0 sudo[37483]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:00 compute-0 sudo[37641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scmsmmupghgdwcrxyednebgmnpyrhdan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712199.9251256-298-135383668114908/AnsiballZ_user.py'
Jan 06 15:10:00 compute-0 sudo[37641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:00 compute-0 python3.9[37643]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 06 15:10:00 compute-0 useradd[37645]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 06 15:10:00 compute-0 sudo[37641]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:01 compute-0 sudo[37801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzkfwfgvydqsysglgjabzwlvanwjtkgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712201.0537677-306-136297188021883/AnsiballZ_getent.py'
Jan 06 15:10:01 compute-0 sudo[37801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:01 compute-0 python3.9[37803]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 06 15:10:01 compute-0 sudo[37801]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:02 compute-0 sudo[37954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxvwzjkagnbwrfklkkfytxbolmbvirwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712201.8756077-314-189988130074582/AnsiballZ_group.py'
Jan 06 15:10:02 compute-0 sudo[37954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:02 compute-0 python3.9[37956]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 06 15:10:02 compute-0 groupadd[37957]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 06 15:10:02 compute-0 groupadd[37957]: group added to /etc/gshadow: name=hugetlbfs
Jan 06 15:10:02 compute-0 groupadd[37957]: new group: name=hugetlbfs, GID=42477
Jan 06 15:10:02 compute-0 sudo[37954]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:03 compute-0 sudo[38112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdzknhisdfvdggvemqhtuupcwjimytjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712203.1425846-323-257677315035375/AnsiballZ_file.py'
Jan 06 15:10:03 compute-0 sudo[38112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:03 compute-0 python3.9[38114]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 06 15:10:03 compute-0 sudo[38112]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:04 compute-0 sudo[38264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uecqemsqqkibemkajmlfftylrrgaiali ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712204.033992-334-115886651033465/AnsiballZ_dnf.py'
Jan 06 15:10:04 compute-0 sudo[38264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:04 compute-0 python3.9[38266]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:10:08 compute-0 sudo[38264]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:08 compute-0 sudo[38417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njgnwmwlxbrehhogjnrjiukanhihneqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712208.5748713-342-40934946905040/AnsiballZ_file.py'
Jan 06 15:10:08 compute-0 sudo[38417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:09 compute-0 python3.9[38419]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:10:09 compute-0 sudo[38417]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:09 compute-0 sudo[38569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmkadopdujhvznetvrukazcyesehjgpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712209.3159187-350-266787851209856/AnsiballZ_stat.py'
Jan 06 15:10:09 compute-0 sudo[38569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:09 compute-0 python3.9[38571]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:10:09 compute-0 sudo[38569]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:10 compute-0 sudo[38692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaxfrrezsmkilqeiyfruleihrkuyinsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712209.3159187-350-266787851209856/AnsiballZ_copy.py'
Jan 06 15:10:10 compute-0 sudo[38692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:10 compute-0 python3.9[38694]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712209.3159187-350-266787851209856/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:10:10 compute-0 sudo[38692]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:11 compute-0 sudo[38844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryfoeowbtfsyloetfcujjovogcmyismd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712210.6521122-365-212305233620914/AnsiballZ_systemd.py'
Jan 06 15:10:11 compute-0 sudo[38844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:11 compute-0 python3.9[38846]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:10:11 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 06 15:10:11 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 06 15:10:11 compute-0 kernel: Bridge firewalling registered
Jan 06 15:10:11 compute-0 systemd-modules-load[38850]: Inserted module 'br_netfilter'
Jan 06 15:10:11 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 06 15:10:11 compute-0 sudo[38844]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:12 compute-0 sudo[39004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxjxduyadpnxoaafzpunttbskhniwawg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712211.9946904-373-87845114296175/AnsiballZ_stat.py'
Jan 06 15:10:12 compute-0 sudo[39004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:12 compute-0 python3.9[39006]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:10:12 compute-0 sudo[39004]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:13 compute-0 sudo[39127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qivrbbqhboghmzkfyoamtaqojdjmhxuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712211.9946904-373-87845114296175/AnsiballZ_copy.py'
Jan 06 15:10:13 compute-0 sudo[39127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:13 compute-0 python3.9[39129]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712211.9946904-373-87845114296175/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:10:13 compute-0 sudo[39127]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:14 compute-0 sudo[39279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixqhytzebqjzxrbgqryrunsjeastlccv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712213.64875-391-222891290550440/AnsiballZ_dnf.py'
Jan 06 15:10:14 compute-0 sudo[39279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:14 compute-0 python3.9[39281]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:10:18 compute-0 dbus-broker-launch[741]: Noticed file-system modification, trigger reload.
Jan 06 15:10:18 compute-0 dbus-broker-launch[741]: Noticed file-system modification, trigger reload.
Jan 06 15:10:18 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 06 15:10:18 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 06 15:10:18 compute-0 systemd[1]: Reloading.
Jan 06 15:10:19 compute-0 systemd-rc-local-generator[39343]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:10:19 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 06 15:10:19 compute-0 sudo[39279]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:20 compute-0 python3.9[40604]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:10:21 compute-0 python3.9[41389]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 06 15:10:22 compute-0 python3.9[42056]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:10:22 compute-0 sudo[42813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lchptdcmenhtlxacehmepajgqsjkofiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712222.479363-430-78850661787211/AnsiballZ_command.py'
Jan 06 15:10:22 compute-0 sudo[42813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:22 compute-0 python3.9[42832]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:10:23 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 06 15:10:23 compute-0 systemd[1]: Starting Authorization Manager...
Jan 06 15:10:23 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 06 15:10:23 compute-0 polkitd[43625]: Started polkitd version 0.117
Jan 06 15:10:23 compute-0 polkitd[43625]: Loading rules from directory /etc/polkit-1/rules.d
Jan 06 15:10:23 compute-0 polkitd[43625]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 06 15:10:23 compute-0 polkitd[43625]: Finished loading, compiling and executing 2 rules
Jan 06 15:10:23 compute-0 polkitd[43625]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 06 15:10:23 compute-0 systemd[1]: Started Authorization Manager.
Jan 06 15:10:23 compute-0 sudo[42813]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:23 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 06 15:10:23 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 06 15:10:23 compute-0 systemd[1]: man-db-cache-update.service: Consumed 6.048s CPU time.
Jan 06 15:10:23 compute-0 systemd[1]: run-r4c8b11fd92924860a08c3e02dbf32f74.service: Deactivated successfully.
Jan 06 15:10:24 compute-0 sudo[43826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbcscsnluwqaolfvenyietzzniesuuio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712223.9497964-439-249603477892284/AnsiballZ_systemd.py'
Jan 06 15:10:24 compute-0 sudo[43826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:24 compute-0 python3.9[43828]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:10:25 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 06 15:10:25 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 06 15:10:25 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 06 15:10:25 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 06 15:10:25 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 06 15:10:26 compute-0 sudo[43826]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:26 compute-0 python3.9[43989]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 06 15:10:29 compute-0 sudo[44139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfzlryfmwmrwchnvxknffkywgffrmnmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712228.638909-496-162244571149515/AnsiballZ_systemd.py'
Jan 06 15:10:29 compute-0 sudo[44139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:29 compute-0 python3.9[44141]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:10:29 compute-0 systemd[1]: Reloading.
Jan 06 15:10:29 compute-0 systemd-rc-local-generator[44164]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:10:29 compute-0 sudo[44139]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:30 compute-0 sudo[44329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgrbytzmzjfgbqzhinvjpubctmidxhyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712229.8051968-496-258841816156596/AnsiballZ_systemd.py'
Jan 06 15:10:30 compute-0 sudo[44329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:30 compute-0 python3.9[44331]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:10:30 compute-0 systemd[1]: Reloading.
Jan 06 15:10:30 compute-0 systemd-rc-local-generator[44361]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:10:30 compute-0 sudo[44329]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:31 compute-0 sudo[44518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejyxqyhtoqkzoqyrjnhqvmobbmixzgze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712231.1528625-512-233437223993371/AnsiballZ_command.py'
Jan 06 15:10:31 compute-0 sudo[44518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:31 compute-0 python3.9[44520]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:10:31 compute-0 sudo[44518]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:32 compute-0 sudo[44671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aurtjbvydeismtnqodwgbpnbqdrubocb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712232.053965-520-34017215797048/AnsiballZ_command.py'
Jan 06 15:10:32 compute-0 sudo[44671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:32 compute-0 python3.9[44673]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:10:32 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 06 15:10:32 compute-0 sudo[44671]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:33 compute-0 sudo[44824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbgktqrzlkovzophhhjxgvavkojpmoow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712232.9819214-528-270444013277915/AnsiballZ_command.py'
Jan 06 15:10:33 compute-0 sudo[44824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:33 compute-0 python3.9[44826]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:10:35 compute-0 sudo[44824]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:35 compute-0 sudo[44986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kasxelcegnmywvgdejxyopdcnzppsnaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712235.3309102-536-120141678664051/AnsiballZ_command.py'
Jan 06 15:10:35 compute-0 sudo[44986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:35 compute-0 python3.9[44988]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:10:35 compute-0 sudo[44986]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:36 compute-0 sudo[45139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-danvmmcbgdxjweueeuoebuuddtdiobdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712236.118286-544-171699643094531/AnsiballZ_systemd.py'
Jan 06 15:10:36 compute-0 sudo[45139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:36 compute-0 python3.9[45141]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:10:36 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 06 15:10:36 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 06 15:10:36 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 06 15:10:36 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 06 15:10:36 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 06 15:10:36 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 06 15:10:36 compute-0 sudo[45139]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:37 compute-0 sshd-session[31523]: Connection closed by 192.168.122.30 port 53254
Jan 06 15:10:37 compute-0 sshd-session[31520]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:10:37 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 06 15:10:37 compute-0 systemd[1]: session-10.scope: Consumed 2min 34.050s CPU time.
Jan 06 15:10:37 compute-0 systemd-logind[791]: Session 10 logged out. Waiting for processes to exit.
Jan 06 15:10:37 compute-0 systemd-logind[791]: Removed session 10.
Jan 06 15:10:42 compute-0 sshd-session[45171]: Accepted publickey for zuul from 192.168.122.30 port 34468 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:10:42 compute-0 systemd-logind[791]: New session 11 of user zuul.
Jan 06 15:10:42 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 06 15:10:42 compute-0 sshd-session[45171]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:10:44 compute-0 python3.9[45324]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:10:45 compute-0 python3.9[45478]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:10:46 compute-0 sudo[45632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdsyxdzhlrdwtoeiwxbcldlnzljqweuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712245.920016-45-188263801500375/AnsiballZ_command.py'
Jan 06 15:10:46 compute-0 sudo[45632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:46 compute-0 python3.9[45634]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:10:46 compute-0 sudo[45632]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:47 compute-0 python3.9[45785]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:10:48 compute-0 sudo[45939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfkvypmzpyvpygbukejveclfbjbuskyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712248.0690286-65-126804250584151/AnsiballZ_setup.py'
Jan 06 15:10:48 compute-0 sudo[45939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:48 compute-0 python3.9[45941]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:10:48 compute-0 sudo[45939]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:49 compute-0 sudo[46023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzjwkbpkozoxwazzadnwrjwllmctrpgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712248.0690286-65-126804250584151/AnsiballZ_dnf.py'
Jan 06 15:10:49 compute-0 sudo[46023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:49 compute-0 python3.9[46025]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:10:51 compute-0 sudo[46023]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:51 compute-0 sudo[46176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vspfviepvdhjwqsmqjsptxigvorbwrql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712251.5896466-77-70023110843061/AnsiballZ_setup.py'
Jan 06 15:10:51 compute-0 sudo[46176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:52 compute-0 python3.9[46178]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:10:52 compute-0 sudo[46176]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:53 compute-0 sudo[46347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtvgfkqzsstamispeemwngzjxmovudgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712252.7540846-88-166351167576522/AnsiballZ_file.py'
Jan 06 15:10:53 compute-0 sudo[46347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:53 compute-0 python3.9[46349]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:10:53 compute-0 sudo[46347]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:53 compute-0 sudo[46499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzzcqfuyckjcrecugexqcrtbmnitslxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712253.6681051-96-140124735510677/AnsiballZ_command.py'
Jan 06 15:10:53 compute-0 sudo[46499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:54 compute-0 python3.9[46501]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:10:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1437794984-merged.mount: Deactivated successfully.
Jan 06 15:10:54 compute-0 podman[46502]: 2026-01-06 15:10:54.30925264 +0000 UTC m=+0.095874743 system refresh
Jan 06 15:10:54 compute-0 sudo[46499]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:55 compute-0 sudo[46663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhwbrzwfwhbjjjkpxtkrmccofrowpfwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712254.6692352-104-76591043381923/AnsiballZ_stat.py'
Jan 06 15:10:55 compute-0 sudo[46663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:55 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:10:55 compute-0 python3.9[46665]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:10:55 compute-0 sudo[46663]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:55 compute-0 sudo[46786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjfmoqyzvtryntyeyyutupjkyrgbjqcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712254.6692352-104-76591043381923/AnsiballZ_copy.py'
Jan 06 15:10:55 compute-0 sudo[46786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:56 compute-0 python3.9[46788]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712254.6692352-104-76591043381923/.source.json follow=False _original_basename=podman_network_config.j2 checksum=9e2860b36eeb769bcb44a68848e38aaf759bd228 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:10:56 compute-0 sudo[46786]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:56 compute-0 sudo[46938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yurgiwhbvbyrgugottdhdpzfpxneucqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712256.277669-119-271167835188819/AnsiballZ_stat.py'
Jan 06 15:10:56 compute-0 sudo[46938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:56 compute-0 python3.9[46940]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:10:56 compute-0 sudo[46938]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:57 compute-0 sudo[47061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqxgqyslmnjtjembeualrwceoajbkmix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712256.277669-119-271167835188819/AnsiballZ_copy.py'
Jan 06 15:10:57 compute-0 sudo[47061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:57 compute-0 python3.9[47063]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712256.277669-119-271167835188819/.source.conf follow=False _original_basename=registries.conf.j2 checksum=937bbf009263dfa93b72b20b25de6a241077d8e0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:10:57 compute-0 sudo[47061]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:58 compute-0 sudo[47213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsgkogdhppjsmrgnyifalcrdrmrtlzja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712257.7250595-135-204052211634762/AnsiballZ_ini_file.py'
Jan 06 15:10:58 compute-0 sudo[47213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:58 compute-0 python3.9[47215]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:10:58 compute-0 sudo[47213]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:58 compute-0 sudo[47365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmbuyfyvuetlttwvevxzpahsmcoeamer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712258.5110703-135-119399919742023/AnsiballZ_ini_file.py'
Jan 06 15:10:58 compute-0 sudo[47365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:58 compute-0 python3.9[47367]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:10:58 compute-0 sudo[47365]: pam_unix(sudo:session): session closed for user root
Jan 06 15:10:59 compute-0 sudo[47517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cicjgfqevvwukurzggwuklerkxiabcov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712259.2707605-135-156784686812874/AnsiballZ_ini_file.py'
Jan 06 15:10:59 compute-0 sudo[47517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:10:59 compute-0 python3.9[47519]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:10:59 compute-0 sudo[47517]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:00 compute-0 sudo[47669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwgisxqrqapcfxmvfnfysuljhteensvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712259.8828688-135-265563362808613/AnsiballZ_ini_file.py'
Jan 06 15:11:00 compute-0 sudo[47669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:00 compute-0 python3.9[47671]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:11:00 compute-0 sudo[47669]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:01 compute-0 python3.9[47822]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:11:02 compute-0 sudo[47974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qykqyvueayffxcmezazdowiwsyblvdsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712261.8827753-175-243683135541961/AnsiballZ_dnf.py'
Jan 06 15:11:02 compute-0 sudo[47974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:02 compute-0 python3.9[47976]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:11:04 compute-0 sudo[47974]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:05 compute-0 sudo[48127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bazdlohfvlgxmfrxijjdghrjxvorjuce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712265.0800571-183-95435441630363/AnsiballZ_dnf.py'
Jan 06 15:11:05 compute-0 sudo[48127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:05 compute-0 python3.9[48129]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:11:07 compute-0 sudo[48127]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:07 compute-0 sudo[48287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaoysyybrsqxhesvqqgswujhumekzsti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712267.5130863-193-16309209071737/AnsiballZ_dnf.py'
Jan 06 15:11:07 compute-0 sudo[48287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:08 compute-0 python3.9[48289]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:11:09 compute-0 sudo[48287]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:10 compute-0 sudo[48440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djxlloxxykrufmrylplyhzizprvrmdfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712269.765697-202-271163571104889/AnsiballZ_dnf.py'
Jan 06 15:11:10 compute-0 sudo[48440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:10 compute-0 python3.9[48442]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:11:12 compute-0 sudo[48440]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:12 compute-0 sudo[48593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sepymwjptazmcvuznsvpcmupnaawucaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712272.554867-213-120942855598436/AnsiballZ_dnf.py'
Jan 06 15:11:12 compute-0 sudo[48593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:13 compute-0 python3.9[48595]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:11:14 compute-0 sudo[48593]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:15 compute-0 sudo[48749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgmzgiszcdgbjwfljsevxoinpozvvgjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712274.9581892-221-181230401054485/AnsiballZ_dnf.py'
Jan 06 15:11:15 compute-0 sudo[48749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:15 compute-0 python3.9[48751]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:11:18 compute-0 sudo[48749]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:19 compute-0 sudo[48918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtohzimfprumxzgceodfxgkykequbxbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712278.7590911-230-77611155247480/AnsiballZ_dnf.py'
Jan 06 15:11:19 compute-0 sudo[48918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:19 compute-0 python3.9[48920]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:11:20 compute-0 sudo[48918]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:21 compute-0 sudo[49071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhbamzjqimpgiqpucrmukcxlosnonmhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712280.7164004-239-168087979378000/AnsiballZ_dnf.py'
Jan 06 15:11:21 compute-0 sudo[49071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:21 compute-0 python3.9[49073]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:11:27 compute-0 sshd-session[49085]: banner exchange: Connection from 65.49.1.80 port 61576: invalid format
Jan 06 15:11:36 compute-0 sudo[49071]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:37 compute-0 sudo[49410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqrxgcpmlgemmcztplrkgpzvekjrnpsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712297.1207383-248-165730821974137/AnsiballZ_dnf.py'
Jan 06 15:11:37 compute-0 sudo[49410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:37 compute-0 python3.9[49412]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:11:39 compute-0 sudo[49410]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:39 compute-0 sudo[49566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnbuzudzcygyimjoilbkupoeeqthbaks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712299.5693483-258-51852042345358/AnsiballZ_dnf.py'
Jan 06 15:11:39 compute-0 sudo[49566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:40 compute-0 python3.9[49568]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:11:41 compute-0 sudo[49566]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:42 compute-0 sudo[49723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azvupxwpmasgnltqmzienulqtcyliuvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712302.2324169-269-203673020284245/AnsiballZ_file.py'
Jan 06 15:11:42 compute-0 sudo[49723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:42 compute-0 python3.9[49725]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:11:42 compute-0 sudo[49723]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:43 compute-0 sudo[49898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuyxqqjmhktsncgsgsxrlqrduwrytqsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712303.0430837-277-208319778942881/AnsiballZ_stat.py'
Jan 06 15:11:43 compute-0 sudo[49898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:43 compute-0 python3.9[49900]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:11:43 compute-0 sudo[49898]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:44 compute-0 sudo[50021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dancifjczmohenvdecizguefhtdpnnbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712303.0430837-277-208319778942881/AnsiballZ_copy.py'
Jan 06 15:11:44 compute-0 sudo[50021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:44 compute-0 python3.9[50023]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1767712303.0430837-277-208319778942881/.source.json _original_basename=.vs5b4xnn follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:11:44 compute-0 sudo[50021]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:45 compute-0 sudo[50173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gejlehbhpcwejesjjbawnebzfnkrhxpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712304.7808537-295-23150899366527/AnsiballZ_podman_image.py'
Jan 06 15:11:45 compute-0 sudo[50173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:45 compute-0 python3.9[50175]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 06 15:11:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:11:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4150906321-lower\x2dmapped.mount: Deactivated successfully.
Jan 06 15:11:56 compute-0 podman[50186]: 2026-01-06 15:11:56.201438693 +0000 UTC m=+10.447677780 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 06 15:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:11:56 compute-0 sudo[50173]: pam_unix(sudo:session): session closed for user root
Jan 06 15:11:57 compute-0 sudo[50481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrgjzmoyylpawkxhqocayqnfhyoxjtjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712316.8882165-306-2518991341101/AnsiballZ_podman_image.py'
Jan 06 15:11:57 compute-0 sudo[50481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:11:57 compute-0 python3.9[50483]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 06 15:11:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:13 compute-0 podman[50495]: 2026-01-06 15:12:13.940798388 +0000 UTC m=+16.487106901 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 06 15:12:13 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:14 compute-0 sudo[50481]: pam_unix(sudo:session): session closed for user root
Jan 06 15:12:15 compute-0 sudo[50791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orixfvavlijhxdlltuwulkuomurghcyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712335.0634995-316-259213759630129/AnsiballZ_podman_image.py'
Jan 06 15:12:15 compute-0 sudo[50791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:12:15 compute-0 python3.9[50793]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 06 15:12:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:33 compute-0 podman[50805]: 2026-01-06 15:12:33.285605353 +0000 UTC m=+17.601248344 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 06 15:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:33 compute-0 sudo[50791]: pam_unix(sudo:session): session closed for user root
Jan 06 15:12:34 compute-0 sudo[51060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziwcvhzcryvormsfnoiasjndlwmuhxef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712353.9707124-327-56180307868418/AnsiballZ_podman_image.py'
Jan 06 15:12:34 compute-0 sudo[51060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:12:34 compute-0 python3.9[51062]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 06 15:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:57 compute-0 podman[51074]: 2026-01-06 15:12:57.704583324 +0000 UTC m=+23.079611317 image pull 6e61bfccaf21ee9962f8af7b3bc33737123ae362fb340f43cd517263f3ab794c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 06 15:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:12:58 compute-0 sudo[51060]: pam_unix(sudo:session): session closed for user root
Jan 06 15:12:58 compute-0 sudo[51392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqdlrygirbjbrmjsmtsfnlcdkxcdwcmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712378.200115-327-245793431145339/AnsiballZ_podman_image.py'
Jan 06 15:12:58 compute-0 sudo[51392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:12:58 compute-0 python3.9[51394]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 06 15:12:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:00 compute-0 podman[51406]: 2026-01-06 15:13:00.308430847 +0000 UTC m=+1.536588696 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 06 15:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:00 compute-0 sudo[51392]: pam_unix(sudo:session): session closed for user root
Jan 06 15:13:01 compute-0 sudo[51675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvnhqjwtpxvousrwduwfmdwzrcafjcgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712380.943965-343-233743198186369/AnsiballZ_podman_image.py'
Jan 06 15:13:01 compute-0 sudo[51675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:13:01 compute-0 python3.9[51677]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 06 15:13:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:07 compute-0 podman[51689]: 2026-01-06 15:13:07.850592904 +0000 UTC m=+6.176250920 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 06 15:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:08 compute-0 sudo[51675]: pam_unix(sudo:session): session closed for user root
Jan 06 15:13:08 compute-0 sudo[51943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjukvpucdbvxyeedtcfafzfmvscaxaat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712388.4552028-343-11369752948011/AnsiballZ_podman_image.py'
Jan 06 15:13:08 compute-0 sudo[51943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:13:09 compute-0 python3.9[51945]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 06 15:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:21 compute-0 podman[51958]: 2026-01-06 15:13:21.897886328 +0000 UTC m=+12.755013988 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 06 15:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:13:22 compute-0 sudo[51943]: pam_unix(sudo:session): session closed for user root
Jan 06 15:13:22 compute-0 sshd-session[45174]: Connection closed by 192.168.122.30 port 34468
Jan 06 15:13:22 compute-0 sshd-session[45171]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:13:22 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 06 15:13:22 compute-0 systemd[1]: session-11.scope: Consumed 3min 16.602s CPU time.
Jan 06 15:13:22 compute-0 systemd-logind[791]: Session 11 logged out. Waiting for processes to exit.
Jan 06 15:13:22 compute-0 systemd-logind[791]: Removed session 11.
Jan 06 15:13:22 compute-0 sshd-session[52220]: error: kex_exchange_identification: read: Connection reset by peer
Jan 06 15:13:22 compute-0 sshd-session[52220]: Connection reset by 45.140.17.97 port 46944
Jan 06 15:13:28 compute-0 sshd-session[52221]: Accepted publickey for zuul from 192.168.122.30 port 37966 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:13:28 compute-0 systemd-logind[791]: New session 12 of user zuul.
Jan 06 15:13:28 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 06 15:13:28 compute-0 sshd-session[52221]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:13:29 compute-0 python3.9[52374]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:13:30 compute-0 sudo[52528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arhdfgggcfypjctqeootualqpgnhoxcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712410.467953-31-48781927547262/AnsiballZ_getent.py'
Jan 06 15:13:30 compute-0 sudo[52528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:13:31 compute-0 python3.9[52530]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 06 15:13:31 compute-0 sudo[52528]: pam_unix(sudo:session): session closed for user root
Jan 06 15:13:31 compute-0 sudo[52681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymitaomptxnmohfsyaewuwqfgbeuzchf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712411.4225864-39-217753779037791/AnsiballZ_group.py'
Jan 06 15:13:32 compute-0 sudo[52681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:13:32 compute-0 python3.9[52683]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 06 15:13:32 compute-0 groupadd[52684]: group added to /etc/group: name=openvswitch, GID=42476
Jan 06 15:13:32 compute-0 groupadd[52684]: group added to /etc/gshadow: name=openvswitch
Jan 06 15:13:32 compute-0 groupadd[52684]: new group: name=openvswitch, GID=42476
Jan 06 15:13:32 compute-0 sudo[52681]: pam_unix(sudo:session): session closed for user root
Jan 06 15:13:33 compute-0 sudo[52839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zteozwpikizjmyxitrlthpijuwfcrihn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712412.585863-47-145054072460580/AnsiballZ_user.py'
Jan 06 15:13:33 compute-0 sudo[52839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:13:33 compute-0 python3.9[52841]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 06 15:13:33 compute-0 useradd[52843]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 06 15:13:33 compute-0 useradd[52843]: add 'openvswitch' to group 'hugetlbfs'
Jan 06 15:13:33 compute-0 useradd[52843]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 06 15:13:33 compute-0 sudo[52839]: pam_unix(sudo:session): session closed for user root
Jan 06 15:13:34 compute-0 sudo[52999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cudtdyyaigyuyprutxtyylmikmvszgzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712414.0423539-57-250376923652549/AnsiballZ_setup.py'
Jan 06 15:13:34 compute-0 sudo[52999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:13:34 compute-0 python3.9[53001]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:13:34 compute-0 sudo[52999]: pam_unix(sudo:session): session closed for user root
Jan 06 15:13:35 compute-0 sudo[53083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifugblrnrnzttmhduhjxsyonihqpxtel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712414.0423539-57-250376923652549/AnsiballZ_dnf.py'
Jan 06 15:13:35 compute-0 sudo[53083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:13:35 compute-0 python3.9[53085]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:13:37 compute-0 sudo[53083]: pam_unix(sudo:session): session closed for user root
Jan 06 15:13:38 compute-0 sudo[53245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nezyyrvzznsohdgrbhrxrszzqmcwssul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712417.7113402-71-124165294568447/AnsiballZ_dnf.py'
Jan 06 15:13:38 compute-0 sudo[53245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:13:38 compute-0 python3.9[53247]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:13:54 compute-0 kernel: SELinux:  Converting 2732 SID table entries...
Jan 06 15:13:54 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 06 15:13:54 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 06 15:13:54 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 06 15:13:54 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 06 15:13:54 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 06 15:13:54 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 06 15:13:54 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 06 15:13:54 compute-0 groupadd[53272]: group added to /etc/group: name=unbound, GID=994
Jan 06 15:13:55 compute-0 groupadd[53272]: group added to /etc/gshadow: name=unbound
Jan 06 15:13:55 compute-0 groupadd[53272]: new group: name=unbound, GID=994
Jan 06 15:13:55 compute-0 useradd[53279]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 06 15:13:55 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 06 15:13:55 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 06 15:13:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 06 15:13:58 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 06 15:13:58 compute-0 systemd[1]: Reloading.
Jan 06 15:13:58 compute-0 systemd-sysv-generator[53780]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:13:58 compute-0 systemd-rc-local-generator[53774]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:13:58 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 06 15:13:59 compute-0 sudo[53245]: pam_unix(sudo:session): session closed for user root
Jan 06 15:13:59 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 06 15:13:59 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 06 15:13:59 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.113s CPU time.
Jan 06 15:13:59 compute-0 systemd[1]: run-rf745bc049b514bc7a570b1b90ed8c8bc.service: Deactivated successfully.
Jan 06 15:13:59 compute-0 sudo[54346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrppsovqrqihzzbcbmalphvunxlpqhhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712439.2579868-79-188399077100739/AnsiballZ_systemd.py'
Jan 06 15:13:59 compute-0 sudo[54346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:00 compute-0 python3.9[54348]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 06 15:14:00 compute-0 systemd[1]: Reloading.
Jan 06 15:14:00 compute-0 systemd-rc-local-generator[54374]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:14:00 compute-0 systemd-sysv-generator[54380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:14:00 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 06 15:14:00 compute-0 chown[54390]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 06 15:14:00 compute-0 ovs-ctl[54395]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 06 15:14:00 compute-0 ovs-ctl[54395]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 06 15:14:00 compute-0 ovs-ctl[54395]: Starting ovsdb-server [  OK  ]
Jan 06 15:14:00 compute-0 ovs-vsctl[54444]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 06 15:14:01 compute-0 ovs-vsctl[54464]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c958bb1c-18b4-4d04-b6d7-d8a86dfc32de\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 06 15:14:01 compute-0 ovs-ctl[54395]: Configuring Open vSwitch system IDs [  OK  ]
Jan 06 15:14:01 compute-0 ovs-ctl[54395]: Enabling remote OVSDB managers [  OK  ]
Jan 06 15:14:01 compute-0 ovs-vsctl[54470]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 06 15:14:01 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 06 15:14:01 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 06 15:14:01 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 06 15:14:01 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 06 15:14:01 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 06 15:14:01 compute-0 ovs-ctl[54515]: Inserting openvswitch module [  OK  ]
Jan 06 15:14:01 compute-0 ovs-ctl[54484]: Starting ovs-vswitchd [  OK  ]
Jan 06 15:14:01 compute-0 ovs-vsctl[54532]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 06 15:14:01 compute-0 ovs-ctl[54484]: Enabling remote OVSDB managers [  OK  ]
Jan 06 15:14:01 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 06 15:14:01 compute-0 systemd[1]: Starting Open vSwitch...
Jan 06 15:14:01 compute-0 systemd[1]: Finished Open vSwitch.
Jan 06 15:14:01 compute-0 sudo[54346]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:02 compute-0 python3.9[54684]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:14:03 compute-0 sudo[54834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aihysddnscbnxfgiqcuiowztxmofjlca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712442.8983312-97-107074231445509/AnsiballZ_sefcontext.py'
Jan 06 15:14:03 compute-0 sudo[54834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:03 compute-0 python3.9[54836]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 06 15:14:05 compute-0 kernel: SELinux:  Converting 2746 SID table entries...
Jan 06 15:14:05 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 06 15:14:05 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 06 15:14:05 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 06 15:14:05 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 06 15:14:05 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 06 15:14:05 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 06 15:14:05 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 06 15:14:05 compute-0 sudo[54834]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:06 compute-0 python3.9[54991]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:14:07 compute-0 sudo[55147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqrxxlrahlxryzlwpyjzhxgbovalmlso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712446.9446862-115-27317756595850/AnsiballZ_dnf.py'
Jan 06 15:14:07 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 06 15:14:07 compute-0 sudo[55147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:07 compute-0 python3.9[55149]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:14:08 compute-0 sudo[55147]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:09 compute-0 sudo[55300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xchekgqamzjkyeqoazbvjzctusthvmnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712449.1299193-123-79842692874826/AnsiballZ_command.py'
Jan 06 15:14:09 compute-0 sudo[55300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:09 compute-0 python3.9[55302]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:14:10 compute-0 sudo[55300]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:11 compute-0 sudo[55587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huomflsahorxhukarjengnexfxnrwrin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712451.047022-131-120457898477497/AnsiballZ_file.py'
Jan 06 15:14:11 compute-0 sudo[55587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:11 compute-0 python3.9[55589]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 06 15:14:11 compute-0 sudo[55587]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:12 compute-0 python3.9[55739]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:14:13 compute-0 sudo[55891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsatmkpjgzttejzilijzuhomfyiesgwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712453.0561376-147-123141911321494/AnsiballZ_dnf.py'
Jan 06 15:14:13 compute-0 sudo[55891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:13 compute-0 python3.9[55893]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:14:15 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 06 15:14:15 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 06 15:14:15 compute-0 systemd[1]: Reloading.
Jan 06 15:14:15 compute-0 systemd-rc-local-generator[55932]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:14:15 compute-0 systemd-sysv-generator[55936]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:14:15 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 06 15:14:16 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 06 15:14:16 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 06 15:14:16 compute-0 systemd[1]: run-rf7c24dee84c74d7abe4ac5dfa55ccb89.service: Deactivated successfully.
Jan 06 15:14:16 compute-0 sudo[55891]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:17 compute-0 sudo[56208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pupvhideeeahmpopfapsfawqdeogtznv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712456.9258578-155-168228589921652/AnsiballZ_systemd.py'
Jan 06 15:14:17 compute-0 sudo[56208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:17 compute-0 python3.9[56210]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:14:17 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 06 15:14:17 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 06 15:14:17 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 06 15:14:17 compute-0 NetworkManager[7190]: <info>  [1767712457.6254] caught SIGTERM, shutting down normally.
Jan 06 15:14:17 compute-0 systemd[1]: Stopping Network Manager...
Jan 06 15:14:17 compute-0 NetworkManager[7190]: <info>  [1767712457.6276] dhcp4 (eth0): canceled DHCP transaction
Jan 06 15:14:17 compute-0 NetworkManager[7190]: <info>  [1767712457.6276] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 06 15:14:17 compute-0 NetworkManager[7190]: <info>  [1767712457.6276] dhcp4 (eth0): state changed no lease
Jan 06 15:14:17 compute-0 NetworkManager[7190]: <info>  [1767712457.6279] manager: NetworkManager state is now CONNECTED_SITE
Jan 06 15:14:17 compute-0 NetworkManager[7190]: <info>  [1767712457.6353] exiting (success)
Jan 06 15:14:17 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 06 15:14:17 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 06 15:14:17 compute-0 systemd[1]: Stopped Network Manager.
Jan 06 15:14:17 compute-0 systemd[1]: NetworkManager.service: Consumed 20.829s CPU time, 4.1M memory peak, read 0B from disk, written 25.0K to disk.
Jan 06 15:14:17 compute-0 systemd[1]: Starting Network Manager...
Jan 06 15:14:17 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.7125] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:9618711b-fe4f-49ab-b47b-caab4b22688f)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.7136] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.7218] manager[0x55918145c000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 06 15:14:17 compute-0 systemd[1]: Starting Hostname Service...
Jan 06 15:14:17 compute-0 systemd[1]: Started Hostname Service.
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8227] hostname: hostname: using hostnamed
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8229] hostname: static hostname changed from (none) to "compute-0"
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8234] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8238] manager[0x55918145c000]: rfkill: Wi-Fi hardware radio set enabled
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8238] manager[0x55918145c000]: rfkill: WWAN hardware radio set enabled
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8261] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8270] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8271] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8271] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8272] manager: Networking is enabled by state file
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8274] settings: Loaded settings plugin: keyfile (internal)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8277] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8303] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8311] dhcp: init: Using DHCP client 'internal'
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8314] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8318] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8323] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8330] device (lo): Activation: starting connection 'lo' (0d776732-e25f-45b4-9be2-41af4991938d)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8336] device (eth0): carrier: link connected
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8339] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8344] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8344] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8350] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8355] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8360] device (eth1): carrier: link connected
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8364] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8368] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ff0b2ac9-e6ee-550e-a6e2-60d885b28b26) (indicated)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8369] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8373] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8379] device (eth1): Activation: starting connection 'ci-private-network' (ff0b2ac9-e6ee-550e-a6e2-60d885b28b26)
Jan 06 15:14:17 compute-0 systemd[1]: Started Network Manager.
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8389] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8396] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8398] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8401] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8403] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8405] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8407] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8409] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8411] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8416] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8419] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8430] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8442] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8462] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8471] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 06 15:14:17 compute-0 sudo[56208]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8804] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8825] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8828] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8830] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8835] device (lo): Activation: successful, device activated.
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8840] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8844] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8846] device (eth1): Activation: successful, device activated.
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8856] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8858] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8861] manager: NetworkManager state is now CONNECTED_SITE
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8864] device (eth0): Activation: successful, device activated.
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8869] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 06 15:14:17 compute-0 NetworkManager[56218]: <info>  [1767712457.8872] manager: startup complete
Jan 06 15:14:17 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 06 15:14:17 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 06 15:14:18 compute-0 sudo[56435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuopgetvoamqwwrdrvrdlygvjtpmsrrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712458.1408265-163-145552819293076/AnsiballZ_dnf.py'
Jan 06 15:14:18 compute-0 sudo[56435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:18 compute-0 python3.9[56437]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:14:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 06 15:14:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 06 15:14:23 compute-0 systemd[1]: Reloading.
Jan 06 15:14:23 compute-0 systemd-sysv-generator[56494]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:14:23 compute-0 systemd-rc-local-generator[56491]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:14:23 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 06 15:14:24 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 06 15:14:24 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 06 15:14:24 compute-0 systemd[1]: run-r902a21f79dba4f6c807d3f425a113a02.service: Deactivated successfully.
Jan 06 15:14:24 compute-0 sudo[56435]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:25 compute-0 sudo[56897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuspkoapsxhetratmymmbquuzmxozurb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712465.0951154-175-187255718287968/AnsiballZ_stat.py'
Jan 06 15:14:25 compute-0 sudo[56897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:25 compute-0 python3.9[56899]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:14:25 compute-0 sudo[56897]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:26 compute-0 sudo[57049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzocmxhxtexckqwsetjefhhifzgtsykd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712465.993223-184-97068104800890/AnsiballZ_ini_file.py'
Jan 06 15:14:26 compute-0 sudo[57049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:26 compute-0 python3.9[57051]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:14:26 compute-0 sudo[57049]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:27 compute-0 sudo[57203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eadvmclkxtdwiphernyfbacloxagglen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712467.0174053-194-272947415685268/AnsiballZ_ini_file.py'
Jan 06 15:14:27 compute-0 sudo[57203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:27 compute-0 python3.9[57205]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:14:27 compute-0 sudo[57203]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:28 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 06 15:14:28 compute-0 sudo[57355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdxssnfbrikjrzoitzumxjvkhmfpotni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712467.8657658-194-1649872597188/AnsiballZ_ini_file.py'
Jan 06 15:14:28 compute-0 sudo[57355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:28 compute-0 python3.9[57357]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:14:28 compute-0 sudo[57355]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:29 compute-0 sudo[57508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upevjtvjdwklwnhqkiidiqtotpctrmyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712468.6962876-209-154629219313927/AnsiballZ_ini_file.py'
Jan 06 15:14:29 compute-0 sudo[57508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:29 compute-0 python3.9[57510]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:14:29 compute-0 sudo[57508]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:29 compute-0 sudo[57660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gevnxvduqwyqjapygsjlcsisvvydsxym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712469.429401-209-96637046264401/AnsiballZ_ini_file.py'
Jan 06 15:14:29 compute-0 sudo[57660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:29 compute-0 python3.9[57662]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:14:29 compute-0 sudo[57660]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:30 compute-0 sudo[57812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moppybyeoxggnevtpdwpumirgkeiurlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712470.3084457-224-45604094460384/AnsiballZ_stat.py'
Jan 06 15:14:30 compute-0 sudo[57812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:30 compute-0 python3.9[57814]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:14:30 compute-0 sudo[57812]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:31 compute-0 sudo[57935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atdzwzbawnscsoicqxlirqtfbkkbdoll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712470.3084457-224-45604094460384/AnsiballZ_copy.py'
Jan 06 15:14:31 compute-0 sudo[57935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:31 compute-0 python3.9[57937]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712470.3084457-224-45604094460384/.source _original_basename=.stvhx6ht follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:14:31 compute-0 sudo[57935]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:32 compute-0 sudo[58087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgoxfvtvyxpcueivtuwfujzdwmzlvqiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712471.8076475-239-255986489393220/AnsiballZ_file.py'
Jan 06 15:14:32 compute-0 sudo[58087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:32 compute-0 python3.9[58089]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:14:32 compute-0 sudo[58087]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:33 compute-0 sudo[58239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wynlsxnkvuegivzmycwenuafgrayaafa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712472.6307151-247-220067421658455/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 06 15:14:33 compute-0 sudo[58239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:33 compute-0 python3.9[58241]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 06 15:14:33 compute-0 sudo[58239]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:33 compute-0 sudo[58391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phohvpzqdndtfpompeowfoumakgbwwuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712473.602276-256-248834367255324/AnsiballZ_file.py'
Jan 06 15:14:33 compute-0 sudo[58391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:34 compute-0 python3.9[58393]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:14:34 compute-0 sudo[58391]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:35 compute-0 sudo[58543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jacrpevrbjmiswfooshtcdezovmhvslp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712474.567625-266-82757868628155/AnsiballZ_stat.py'
Jan 06 15:14:35 compute-0 sudo[58543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:35 compute-0 sudo[58543]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:35 compute-0 sudo[58666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojqwabrkaitmgjcitcmorzkjvpmhsili ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712474.567625-266-82757868628155/AnsiballZ_copy.py'
Jan 06 15:14:35 compute-0 sudo[58666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:35 compute-0 sudo[58666]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:36 compute-0 sudo[58818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqggbkciosmxqpgsrhdtlzufougzjviz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712476.2477248-281-207797417074556/AnsiballZ_slurp.py'
Jan 06 15:14:36 compute-0 sudo[58818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:36 compute-0 python3.9[58820]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 06 15:14:36 compute-0 sudo[58818]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:38 compute-0 sshd-session[57475]: banner exchange: Connection from 106.13.75.140 port 38811: invalid format
Jan 06 15:14:38 compute-0 sudo[58993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lulyhylawklfcrqllkmvysskhooodlyp ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712477.2377305-290-47145768024593/async_wrapper.py j203045093809 300 /home/zuul/.ansible/tmp/ansible-tmp-1767712477.2377305-290-47145768024593/AnsiballZ_edpm_os_net_config.py _'
Jan 06 15:14:38 compute-0 sudo[58993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:38 compute-0 ansible-async_wrapper.py[58995]: Invoked with j203045093809 300 /home/zuul/.ansible/tmp/ansible-tmp-1767712477.2377305-290-47145768024593/AnsiballZ_edpm_os_net_config.py _
Jan 06 15:14:38 compute-0 ansible-async_wrapper.py[58998]: Starting module and watcher
Jan 06 15:14:38 compute-0 ansible-async_wrapper.py[58998]: Start watching 58999 (300)
Jan 06 15:14:38 compute-0 ansible-async_wrapper.py[58999]: Start module (58999)
Jan 06 15:14:38 compute-0 ansible-async_wrapper.py[58995]: Return async_wrapper task started.
Jan 06 15:14:38 compute-0 sudo[58993]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:38 compute-0 python3.9[59000]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 06 15:14:39 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 06 15:14:39 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 06 15:14:39 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 06 15:14:39 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 06 15:14:39 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 06 15:14:40 compute-0 NetworkManager[56218]: <info>  [1767712480.9512] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59001 uid=0 result="success"
Jan 06 15:14:40 compute-0 NetworkManager[56218]: <info>  [1767712480.9552] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0517] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0523] audit: op="connection-add" uuid="f84b37c6-9dcd-402f-b749-800ec2d501b7" name="br-ex-br" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0553] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0557] audit: op="connection-add" uuid="78bd236d-b39c-4219-85cf-8fc869fe7577" name="br-ex-port" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0579] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0583] audit: op="connection-add" uuid="48b4f80d-15b7-4f0b-bb28-6dceec796b85" name="eth1-port" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0607] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0611] audit: op="connection-add" uuid="02e096ea-c9d4-4e43-b811-dd1f70cd82e6" name="vlan20-port" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0638] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0643] audit: op="connection-add" uuid="b50ee60f-21db-4a12-8655-f26959d9034a" name="vlan21-port" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0666] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0670] audit: op="connection-add" uuid="4296d145-51c9-472e-be76-c719cad9b559" name="vlan22-port" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0712] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0746] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0751] audit: op="connection-add" uuid="8682ce81-e2bb-432d-8e28-2385a862534a" name="br-ex-if" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0839] audit: op="connection-update" uuid="ff0b2ac9-e6ee-550e-a6e2-60d885b28b26" name="ci-private-network" args="ipv6.addr-gen-mode,ipv6.routing-rules,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.dns,ovs-external-ids.data,ipv4.never-default,ipv4.routing-rules,ipv4.addresses,ipv4.method,ipv4.routes,ipv4.dns,connection.slave-type,connection.port-type,connection.master,connection.controller,connection.timestamp,ovs-interface.type" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0864] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0865] audit: op="connection-add" uuid="f79b8687-7c4a-40cf-becf-c624d67d3043" name="vlan20-if" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0889] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0890] audit: op="connection-add" uuid="2b397b04-4ad3-453c-b311-48188d62fedb" name="vlan21-if" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0913] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0914] audit: op="connection-add" uuid="99c9b54e-77b6-4145-b5e3-ecd6c7569362" name="vlan22-if" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0933] audit: op="connection-delete" uuid="0612abbc-cf75-33f1-b6b4-e54eeee8ddc7" name="Wired connection 1" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0953] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <warn>  [1767712481.0955] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0961] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0965] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f84b37c6-9dcd-402f-b749-800ec2d501b7)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0966] audit: op="connection-activate" uuid="f84b37c6-9dcd-402f-b749-800ec2d501b7" name="br-ex-br" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0967] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <warn>  [1767712481.0968] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0974] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0978] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (78bd236d-b39c-4219-85cf-8fc869fe7577)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0980] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <warn>  [1767712481.0981] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0985] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0990] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (48b4f80d-15b7-4f0b-bb28-6dceec796b85)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0992] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <warn>  [1767712481.0993] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.0999] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1002] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (02e096ea-c9d4-4e43-b811-dd1f70cd82e6)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1004] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <warn>  [1767712481.1005] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1009] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1013] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b50ee60f-21db-4a12-8655-f26959d9034a)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1015] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <warn>  [1767712481.1016] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1023] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1026] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (4296d145-51c9-472e-be76-c719cad9b559)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1027] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1029] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1032] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1041] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <warn>  [1767712481.1042] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1045] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1049] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (8682ce81-e2bb-432d-8e28-2385a862534a)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1050] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1054] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1056] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1057] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1059] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1073] device (eth1): disconnecting for new activation request.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1073] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1076] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1078] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1080] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1084] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <warn>  [1767712481.1085] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1088] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1092] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (f79b8687-7c4a-40cf-becf-c624d67d3043)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1093] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1096] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1099] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1101] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1104] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <warn>  [1767712481.1105] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1108] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1113] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (2b397b04-4ad3-453c-b311-48188d62fedb)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1114] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1116] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1118] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1119] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1122] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <warn>  [1767712481.1122] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1126] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1131] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (99c9b54e-77b6-4145-b5e3-ecd6c7569362)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1132] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1135] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1136] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1138] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1140] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1154] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1156] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1159] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1161] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1168] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1172] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1176] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1179] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1181] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1185] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1189] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1193] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1195] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1199] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1204] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1207] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1209] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1215] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1221] dhcp4 (eth0): canceled DHCP transaction
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1221] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1222] dhcp4 (eth0): state changed no lease
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1224] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1237] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1241] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59001 uid=0 result="fail" reason="Device is not activated"
Jan 06 15:14:41 compute-0 kernel: Timeout policy base is empty
Jan 06 15:14:41 compute-0 systemd-udevd[59007]: Network interface NamePolicy= disabled on kernel command line.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1287] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1295] device (eth1): disconnecting for new activation request.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1297] audit: op="connection-activate" uuid="ff0b2ac9-e6ee-550e-a6e2-60d885b28b26" name="ci-private-network" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1298] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1302] dhcp4 (eth0): state changed new lease, address=38.102.83.193
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1307] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 06 15:14:41 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1356] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59001 uid=0 result="success"
Jan 06 15:14:41 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1516] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1689] device (eth1): Activation: starting connection 'ci-private-network' (ff0b2ac9-e6ee-550e-a6e2-60d885b28b26)
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1694] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1702] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1705] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1718] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1722] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1726] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1727] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1729] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1730] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1732] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1737] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1744] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1747] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1750] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1753] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1758] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1762] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1766] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1769] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1774] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1778] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1785] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1790] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 kernel: br-ex: entered promiscuous mode
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1854] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1858] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1864] device (eth1): Activation: successful, device activated.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1932] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1944] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 kernel: vlan22: entered promiscuous mode
Jan 06 15:14:41 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1996] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.1998] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2006] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 06 15:14:41 compute-0 kernel: vlan21: entered promiscuous mode
Jan 06 15:14:41 compute-0 systemd-udevd[59006]: Network interface NamePolicy= disabled on kernel command line.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2115] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2133] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2150] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2151] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2155] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 06 15:14:41 compute-0 kernel: vlan20: entered promiscuous mode
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2216] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2228] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2249] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2266] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2271] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2360] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2372] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2391] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2393] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 06 15:14:41 compute-0 NetworkManager[56218]: <info>  [1767712481.2397] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 06 15:14:42 compute-0 sudo[59333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwcbdlotbonrxljblymamyxqfmwtwjsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712481.4331985-290-253382433792477/AnsiballZ_async_status.py'
Jan 06 15:14:42 compute-0 sudo[59333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:42 compute-0 python3.9[59335]: ansible-ansible.legacy.async_status Invoked with jid=j203045093809.58995 mode=status _async_dir=/root/.ansible_async
Jan 06 15:14:42 compute-0 sudo[59333]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:42 compute-0 NetworkManager[56218]: <info>  [1767712482.3690] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59001 uid=0 result="success"
Jan 06 15:14:42 compute-0 NetworkManager[56218]: <info>  [1767712482.6564] checkpoint[0x559181432950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 06 15:14:42 compute-0 NetworkManager[56218]: <info>  [1767712482.6570] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59001 uid=0 result="success"
Jan 06 15:14:43 compute-0 NetworkManager[56218]: <info>  [1767712483.0757] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59001 uid=0 result="success"
Jan 06 15:14:43 compute-0 NetworkManager[56218]: <info>  [1767712483.0774] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59001 uid=0 result="success"
Jan 06 15:14:43 compute-0 ansible-async_wrapper.py[58998]: 58999 still running (300)
Jan 06 15:14:43 compute-0 NetworkManager[56218]: <info>  [1767712483.3160] audit: op="networking-control" arg="global-dns-configuration" pid=59001 uid=0 result="success"
Jan 06 15:14:43 compute-0 NetworkManager[56218]: <info>  [1767712483.3198] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 06 15:14:43 compute-0 NetworkManager[56218]: <info>  [1767712483.3243] audit: op="networking-control" arg="global-dns-configuration" pid=59001 uid=0 result="success"
Jan 06 15:14:43 compute-0 NetworkManager[56218]: <info>  [1767712483.3287] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59001 uid=0 result="success"
Jan 06 15:14:43 compute-0 NetworkManager[56218]: <info>  [1767712483.5047] checkpoint[0x559181432a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 06 15:14:43 compute-0 NetworkManager[56218]: <info>  [1767712483.5060] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59001 uid=0 result="success"
Jan 06 15:14:43 compute-0 ansible-async_wrapper.py[58999]: Module complete (58999)
Jan 06 15:14:45 compute-0 sudo[59440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzkcmavfgfzodlgrrvqmvfppuxemnjcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712481.4331985-290-253382433792477/AnsiballZ_async_status.py'
Jan 06 15:14:45 compute-0 sudo[59440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:45 compute-0 python3.9[59442]: ansible-ansible.legacy.async_status Invoked with jid=j203045093809.58995 mode=status _async_dir=/root/.ansible_async
Jan 06 15:14:45 compute-0 sudo[59440]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:46 compute-0 sudo[59539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsqijasqflzdpfmuercfkplehidgfsjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712481.4331985-290-253382433792477/AnsiballZ_async_status.py'
Jan 06 15:14:46 compute-0 sudo[59539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:46 compute-0 python3.9[59541]: ansible-ansible.legacy.async_status Invoked with jid=j203045093809.58995 mode=cleanup _async_dir=/root/.ansible_async
Jan 06 15:14:46 compute-0 sudo[59539]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:47 compute-0 sudo[59691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sujcozuxdrvflfjuttwuaatskjgzbhyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712486.7314904-317-66053434607507/AnsiballZ_stat.py'
Jan 06 15:14:47 compute-0 sudo[59691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:47 compute-0 python3.9[59693]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:14:47 compute-0 sudo[59691]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:47 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 06 15:14:47 compute-0 sudo[59814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phhwciahbymknofixdaduyfwlrswmtgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712486.7314904-317-66053434607507/AnsiballZ_copy.py'
Jan 06 15:14:47 compute-0 sudo[59814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:48 compute-0 python3.9[59818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712486.7314904-317-66053434607507/.source.returncode _original_basename=.i092ldcx follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:14:48 compute-0 sudo[59814]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:48 compute-0 ansible-async_wrapper.py[58998]: Done in kid B.
Jan 06 15:14:48 compute-0 sudo[59968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnwadhuwflizbmvfdkrcpnpwchlzjira ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712488.340696-333-116158435182654/AnsiballZ_stat.py'
Jan 06 15:14:48 compute-0 sudo[59968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:48 compute-0 python3.9[59970]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:14:48 compute-0 sudo[59968]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:49 compute-0 sudo[60092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufjrrshtkknlnztrchsusjavypxyjxcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712488.340696-333-116158435182654/AnsiballZ_copy.py'
Jan 06 15:14:49 compute-0 sudo[60092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:49 compute-0 python3.9[60094]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712488.340696-333-116158435182654/.source.cfg _original_basename=.kn0emipx follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:14:49 compute-0 sudo[60092]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:50 compute-0 sudo[60244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmodcgqhykbbujzjwchelbvexpjdnzei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712489.8617191-348-75852149068282/AnsiballZ_systemd.py'
Jan 06 15:14:50 compute-0 sudo[60244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:14:50 compute-0 python3.9[60246]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:14:50 compute-0 systemd[1]: Reloading Network Manager...
Jan 06 15:14:50 compute-0 NetworkManager[56218]: <info>  [1767712490.6747] audit: op="reload" arg="0" pid=60250 uid=0 result="success"
Jan 06 15:14:50 compute-0 NetworkManager[56218]: <info>  [1767712490.6764] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 06 15:14:50 compute-0 systemd[1]: Reloaded Network Manager.
Jan 06 15:14:50 compute-0 sudo[60244]: pam_unix(sudo:session): session closed for user root
Jan 06 15:14:51 compute-0 sshd-session[52224]: Connection closed by 192.168.122.30 port 37966
Jan 06 15:14:51 compute-0 sshd-session[52221]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:14:51 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 06 15:14:51 compute-0 systemd[1]: session-12.scope: Consumed 59.733s CPU time.
Jan 06 15:14:51 compute-0 systemd-logind[791]: Session 12 logged out. Waiting for processes to exit.
Jan 06 15:14:51 compute-0 systemd-logind[791]: Removed session 12.
Jan 06 15:14:57 compute-0 sshd-session[60281]: Accepted publickey for zuul from 192.168.122.30 port 37778 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:14:57 compute-0 systemd-logind[791]: New session 13 of user zuul.
Jan 06 15:14:57 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 06 15:14:57 compute-0 sshd-session[60281]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:14:58 compute-0 python3.9[60434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:14:59 compute-0 python3.9[60589]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:15:00 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 06 15:15:01 compute-0 python3.9[60779]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:15:01 compute-0 sshd-session[60284]: Connection closed by 192.168.122.30 port 37778
Jan 06 15:15:01 compute-0 sshd-session[60281]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:15:01 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 06 15:15:01 compute-0 systemd[1]: session-13.scope: Consumed 2.776s CPU time.
Jan 06 15:15:01 compute-0 systemd-logind[791]: Session 13 logged out. Waiting for processes to exit.
Jan 06 15:15:01 compute-0 systemd-logind[791]: Removed session 13.
Jan 06 15:15:07 compute-0 sshd-session[60810]: Accepted publickey for zuul from 192.168.122.30 port 59334 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:15:07 compute-0 systemd-logind[791]: New session 14 of user zuul.
Jan 06 15:15:07 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 06 15:15:07 compute-0 sshd-session[60810]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:15:08 compute-0 python3.9[60964]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:15:09 compute-0 python3.9[61118]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:15:10 compute-0 sudo[61272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpluubzbyquexiufygnshwrbspfyqszw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712510.0829046-35-237365672607457/AnsiballZ_setup.py'
Jan 06 15:15:10 compute-0 sudo[61272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:10 compute-0 python3.9[61274]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:15:11 compute-0 sudo[61272]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:11 compute-0 sudo[61357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oglarxzdqxbehnqhearsjggreqxsxezd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712510.0829046-35-237365672607457/AnsiballZ_dnf.py'
Jan 06 15:15:11 compute-0 sudo[61357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:11 compute-0 python3.9[61359]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:15:13 compute-0 sudo[61357]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:13 compute-0 sudo[61510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rufsnregskgsqxgquynklkgwkooqouak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712513.3159435-47-171371443521437/AnsiballZ_setup.py'
Jan 06 15:15:13 compute-0 sudo[61510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:14 compute-0 python3.9[61512]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:15:14 compute-0 sudo[61510]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:15 compute-0 sudo[61701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhizywwjalgverijnzhckjtlsnrclhdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712514.7470758-58-236008583993611/AnsiballZ_file.py'
Jan 06 15:15:15 compute-0 sudo[61701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:15 compute-0 python3.9[61703]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:15:15 compute-0 sudo[61701]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:16 compute-0 sudo[61853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlfmhriyufpgyssyzbykzwkjlvqvpliy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712515.7340567-66-15426095521561/AnsiballZ_command.py'
Jan 06 15:15:16 compute-0 sudo[61853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:16 compute-0 python3.9[61855]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:15:16 compute-0 sudo[61853]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:17 compute-0 sudo[62016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxrtnhzerluzwlctrezkmzvezlewqeje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712516.9171362-74-277708382186635/AnsiballZ_stat.py'
Jan 06 15:15:17 compute-0 sudo[62016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:17 compute-0 python3.9[62018]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:15:17 compute-0 sudo[62016]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:17 compute-0 sudo[62094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxruljlzkdineuqwybmlrlbjoodhsydv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712516.9171362-74-277708382186635/AnsiballZ_file.py'
Jan 06 15:15:17 compute-0 sudo[62094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:18 compute-0 python3.9[62096]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:15:18 compute-0 sudo[62094]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:18 compute-0 sshd-session[60808]: Invalid user NL5xUDpV2xRa from 106.13.75.140 port 54923
Jan 06 15:15:18 compute-0 sshd-session[60808]: fatal: userauth_pubkey: parse publickey packet: incomplete message [preauth]
Jan 06 15:15:18 compute-0 sudo[62246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peelowviygvufynuhqyeerocsvjmqgnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712518.376046-86-205124447162940/AnsiballZ_stat.py'
Jan 06 15:15:18 compute-0 sudo[62246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:18 compute-0 python3.9[62248]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:15:18 compute-0 sudo[62246]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:19 compute-0 sudo[62324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssljfwpqafkuhilnxfxqhkwcgqcaublh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712518.376046-86-205124447162940/AnsiballZ_file.py'
Jan 06 15:15:19 compute-0 sudo[62324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:19 compute-0 python3.9[62326]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:15:19 compute-0 sudo[62324]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:20 compute-0 sudo[62476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quiqpwraolhrpgeppzscsbillidgwdla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712519.696526-99-67377860053709/AnsiballZ_ini_file.py'
Jan 06 15:15:20 compute-0 sudo[62476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:20 compute-0 python3.9[62478]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:15:20 compute-0 sudo[62476]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:21 compute-0 sudo[62628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmoxkniksugkzzeyfrqsefspbkjbyqzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712520.7386947-99-44735840002910/AnsiballZ_ini_file.py'
Jan 06 15:15:21 compute-0 sudo[62628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:21 compute-0 python3.9[62630]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:15:21 compute-0 sudo[62628]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:21 compute-0 sudo[62780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhavaamrvmtanocakrxltfbbozcdrpop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712521.4739146-99-185928352977290/AnsiballZ_ini_file.py'
Jan 06 15:15:21 compute-0 sudo[62780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:22 compute-0 python3.9[62782]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:15:22 compute-0 sudo[62780]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:22 compute-0 sudo[62932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddskazeouodowbjssxyuztffjtbecpiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712522.2514307-99-87730903415306/AnsiballZ_ini_file.py'
Jan 06 15:15:22 compute-0 sudo[62932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:22 compute-0 python3.9[62934]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:15:22 compute-0 sudo[62932]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:23 compute-0 sudo[63084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqetkuqhzrbhvfynbkhotsxqrmktxyap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712523.081239-130-258441104530081/AnsiballZ_dnf.py'
Jan 06 15:15:23 compute-0 sudo[63084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:23 compute-0 python3.9[63086]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:15:24 compute-0 sudo[63084]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:25 compute-0 sudo[63237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaljuhraehdndrbbkoaseurxirqsgmqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712525.468435-141-148435200601477/AnsiballZ_setup.py'
Jan 06 15:15:25 compute-0 sudo[63237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:26 compute-0 python3.9[63239]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:15:26 compute-0 sudo[63237]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:26 compute-0 sudo[63391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffezakveygogdtxciskdxuoxudesiiel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712526.3187935-149-266247897926764/AnsiballZ_stat.py'
Jan 06 15:15:26 compute-0 sudo[63391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:26 compute-0 python3.9[63393]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:15:26 compute-0 sudo[63391]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:27 compute-0 sudo[63543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otpzfyuzrgvadqroouevztmsnrzcwljd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712527.109907-158-266103178052753/AnsiballZ_stat.py'
Jan 06 15:15:27 compute-0 sudo[63543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:27 compute-0 python3.9[63545]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:15:27 compute-0 sudo[63543]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:28 compute-0 sudo[63695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfyvdzzodryvrwmxuyebhciwerogxtfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712527.9502068-168-11103431205761/AnsiballZ_command.py'
Jan 06 15:15:28 compute-0 sudo[63695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:28 compute-0 python3.9[63697]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:15:28 compute-0 sudo[63695]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:29 compute-0 sudo[63848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwlfpuwiucbwsasmxytkkxkrruitchvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712528.7199101-178-156724032867166/AnsiballZ_service_facts.py'
Jan 06 15:15:29 compute-0 sudo[63848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:29 compute-0 python3.9[63850]: ansible-service_facts Invoked
Jan 06 15:15:29 compute-0 network[63867]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 06 15:15:29 compute-0 network[63868]: 'network-scripts' will be removed from distribution in near future.
Jan 06 15:15:29 compute-0 network[63869]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 06 15:15:32 compute-0 sudo[63848]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:33 compute-0 sudo[64152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgiysxcymkdaroktjyincxykpuhgnwmp ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1767712533.5738237-193-77093228636980/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1767712533.5738237-193-77093228636980/args'
Jan 06 15:15:33 compute-0 sudo[64152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:34 compute-0 sudo[64152]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:34 compute-0 sudo[64319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qliicwmxwqoanftvuhyypwfaqoirhjki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712534.4444363-204-54560133783636/AnsiballZ_dnf.py'
Jan 06 15:15:34 compute-0 sudo[64319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:35 compute-0 python3.9[64321]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:15:36 compute-0 sudo[64319]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:37 compute-0 sudo[64472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibyvcudjgqhgfemmtkrktpaxiefoflic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712536.8805997-217-158122072933097/AnsiballZ_package_facts.py'
Jan 06 15:15:37 compute-0 sudo[64472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:37 compute-0 python3.9[64474]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 06 15:15:38 compute-0 sudo[64472]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:38 compute-0 sudo[64624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfhghrvgezxarbvgnqtctwlflmbppasm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712538.431097-227-238406203216440/AnsiballZ_stat.py'
Jan 06 15:15:38 compute-0 sudo[64624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:38 compute-0 python3.9[64626]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:15:39 compute-0 sudo[64624]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:39 compute-0 sudo[64749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txpiocnccbewgmxkasqukdhjrdyjaoco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712538.431097-227-238406203216440/AnsiballZ_copy.py'
Jan 06 15:15:39 compute-0 sudo[64749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:39 compute-0 python3.9[64751]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712538.431097-227-238406203216440/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:15:39 compute-0 sudo[64749]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:40 compute-0 sudo[64903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyvmpfglqnmoijkktbxdqrltggvvqmal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712540.0556312-242-227432736353425/AnsiballZ_stat.py'
Jan 06 15:15:40 compute-0 sudo[64903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:40 compute-0 python3.9[64905]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:15:40 compute-0 sudo[64903]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:41 compute-0 sudo[65028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujgutxafjtquuqrhpacgusoswgkbicrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712540.0556312-242-227432736353425/AnsiballZ_copy.py'
Jan 06 15:15:41 compute-0 sudo[65028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:41 compute-0 python3.9[65030]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712540.0556312-242-227432736353425/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:15:41 compute-0 sudo[65028]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:42 compute-0 sudo[65182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rherovlvlyiagmrjywfshjjjwrtayztb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712541.8996513-263-222456369280764/AnsiballZ_lineinfile.py'
Jan 06 15:15:42 compute-0 sudo[65182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:42 compute-0 python3.9[65184]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:15:42 compute-0 sudo[65182]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:43 compute-0 sudo[65336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iouavbotqfzmfoldzzsnprhmzmygwkes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712543.3525765-278-64655429654727/AnsiballZ_setup.py'
Jan 06 15:15:43 compute-0 sudo[65336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:44 compute-0 python3.9[65338]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:15:44 compute-0 sudo[65336]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:44 compute-0 sudo[65420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyswvdgtcqnbdvdaccshyptppsifpnit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712543.3525765-278-64655429654727/AnsiballZ_systemd.py'
Jan 06 15:15:44 compute-0 sudo[65420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:45 compute-0 python3.9[65422]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:15:45 compute-0 sudo[65420]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:46 compute-0 sudo[65574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqyqbirbffhcyehsmpoeiadkhlgzkvxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712545.7875102-294-94307741798627/AnsiballZ_setup.py'
Jan 06 15:15:46 compute-0 sudo[65574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:46 compute-0 python3.9[65576]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:15:46 compute-0 sudo[65574]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:47 compute-0 sudo[65658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hghizbijyoaedtiudqlyicqeeosjgplt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712545.7875102-294-94307741798627/AnsiballZ_systemd.py'
Jan 06 15:15:47 compute-0 sudo[65658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:47 compute-0 python3.9[65660]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:15:47 compute-0 chronyd[794]: chronyd exiting
Jan 06 15:15:47 compute-0 systemd[1]: Stopping NTP client/server...
Jan 06 15:15:47 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 06 15:15:47 compute-0 systemd[1]: Stopped NTP client/server.
Jan 06 15:15:47 compute-0 systemd[1]: Starting NTP client/server...
Jan 06 15:15:47 compute-0 chronyd[65668]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 06 15:15:47 compute-0 chronyd[65668]: Frequency -26.479 +/- 0.248 ppm read from /var/lib/chrony/drift
Jan 06 15:15:47 compute-0 chronyd[65668]: Loaded seccomp filter (level 2)
Jan 06 15:15:47 compute-0 systemd[1]: Started NTP client/server.
Jan 06 15:15:47 compute-0 sudo[65658]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:48 compute-0 sshd-session[60813]: Connection closed by 192.168.122.30 port 59334
Jan 06 15:15:48 compute-0 sshd-session[60810]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:15:48 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 06 15:15:48 compute-0 systemd[1]: session-14.scope: Consumed 29.464s CPU time.
Jan 06 15:15:48 compute-0 systemd-logind[791]: Session 14 logged out. Waiting for processes to exit.
Jan 06 15:15:48 compute-0 systemd-logind[791]: Removed session 14.
Jan 06 15:15:53 compute-0 sshd-session[65694]: Accepted publickey for zuul from 192.168.122.30 port 59414 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:15:53 compute-0 systemd-logind[791]: New session 15 of user zuul.
Jan 06 15:15:53 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 06 15:15:53 compute-0 sshd-session[65694]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:15:54 compute-0 python3.9[65847]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:15:56 compute-0 sudo[66001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqmlvtnfmgsmwvhbhvkvudrqxbocxvys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712555.4345198-28-96835398222262/AnsiballZ_file.py'
Jan 06 15:15:56 compute-0 sudo[66001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:56 compute-0 python3.9[66003]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:15:56 compute-0 sudo[66001]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:57 compute-0 sudo[66176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmfukmvknkwjvlnjeczelegmtyzheswy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712556.5775247-36-188766016897119/AnsiballZ_stat.py'
Jan 06 15:15:57 compute-0 sudo[66176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:57 compute-0 python3.9[66178]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:15:57 compute-0 sudo[66176]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:57 compute-0 sudo[66254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwwfptxyszjdchgluihnfuwkxrycfuzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712556.5775247-36-188766016897119/AnsiballZ_file.py'
Jan 06 15:15:57 compute-0 sudo[66254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:57 compute-0 python3.9[66256]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.r4fttx7q recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:15:57 compute-0 sudo[66254]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:58 compute-0 sudo[66406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izxzyjlqiewigzyfyknaichzdnimflef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712558.2562099-56-244950013542460/AnsiballZ_stat.py'
Jan 06 15:15:58 compute-0 sudo[66406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:58 compute-0 python3.9[66408]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:15:58 compute-0 sudo[66406]: pam_unix(sudo:session): session closed for user root
Jan 06 15:15:59 compute-0 sudo[66529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoexitaatrxcwmgutwzzuevcmjjkpdhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712558.2562099-56-244950013542460/AnsiballZ_copy.py'
Jan 06 15:15:59 compute-0 sudo[66529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:15:59 compute-0 python3.9[66531]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712558.2562099-56-244950013542460/.source _original_basename=.lai1z9hk follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:15:59 compute-0 sudo[66529]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:00 compute-0 sudo[66681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlpbpsyzwgwlzqphjlbqtfcmyhmoebfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712559.7362576-72-106830560349793/AnsiballZ_file.py'
Jan 06 15:16:00 compute-0 sudo[66681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:00 compute-0 python3.9[66683]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:16:00 compute-0 sudo[66681]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:00 compute-0 sudo[66833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efmlouslbxjyhtruplkpfebzddjigcop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712560.58577-80-196397624428264/AnsiballZ_stat.py'
Jan 06 15:16:00 compute-0 sudo[66833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:01 compute-0 python3.9[66835]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:01 compute-0 sudo[66833]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:01 compute-0 sudo[66956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfnfkrjyzqyprbdcewavgpaxqlufszvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712560.58577-80-196397624428264/AnsiballZ_copy.py'
Jan 06 15:16:01 compute-0 sudo[66956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:01 compute-0 python3.9[66958]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712560.58577-80-196397624428264/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:16:01 compute-0 sudo[66956]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:02 compute-0 sudo[67108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyjqeubuiltqneeoncztjfecbjwcxprx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712561.9183414-80-232425170114950/AnsiballZ_stat.py'
Jan 06 15:16:02 compute-0 sudo[67108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:02 compute-0 python3.9[67110]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:02 compute-0 sudo[67108]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:02 compute-0 sudo[67231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psdiqjvaodiaevighalrdrqkhanhguyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712561.9183414-80-232425170114950/AnsiballZ_copy.py'
Jan 06 15:16:02 compute-0 sudo[67231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:02 compute-0 python3.9[67233]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712561.9183414-80-232425170114950/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:16:02 compute-0 sudo[67231]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:03 compute-0 sudo[67383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrotpcborwspzdgwiyvtrbtemlyrgezj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712563.1387117-109-161998808115059/AnsiballZ_file.py'
Jan 06 15:16:03 compute-0 sudo[67383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:03 compute-0 python3.9[67385]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:03 compute-0 sudo[67383]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:04 compute-0 sudo[67535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjooheqkfaqiwkwycrfbtdprsxcyviox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712563.9688184-117-28493318024306/AnsiballZ_stat.py'
Jan 06 15:16:04 compute-0 sudo[67535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:04 compute-0 python3.9[67537]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:04 compute-0 sudo[67535]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:05 compute-0 sudo[67658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxtriqxvjhieciumumvslqlujbszyrcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712563.9688184-117-28493318024306/AnsiballZ_copy.py'
Jan 06 15:16:05 compute-0 sudo[67658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:05 compute-0 python3.9[67660]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712563.9688184-117-28493318024306/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:05 compute-0 sudo[67658]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:05 compute-0 sudo[67810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkdhhbdejodixrntjztsplzwqfqzsbge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712565.5048892-132-64015015070561/AnsiballZ_stat.py'
Jan 06 15:16:05 compute-0 sudo[67810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:06 compute-0 python3.9[67812]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:06 compute-0 sudo[67810]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:06 compute-0 sudo[67933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gloarfsukkkybevlxjjsfzloaaqeosxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712565.5048892-132-64015015070561/AnsiballZ_copy.py'
Jan 06 15:16:06 compute-0 sudo[67933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:06 compute-0 python3.9[67935]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712565.5048892-132-64015015070561/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:06 compute-0 sudo[67933]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:07 compute-0 sudo[68085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiluylopthphicotpiwydkwcwbubygdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712566.9666834-147-172659870788963/AnsiballZ_systemd.py'
Jan 06 15:16:07 compute-0 sudo[68085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:08 compute-0 python3.9[68087]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:16:08 compute-0 systemd[1]: Reloading.
Jan 06 15:16:08 compute-0 systemd-rc-local-generator[68113]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:16:08 compute-0 systemd-sysv-generator[68116]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:16:08 compute-0 systemd[1]: Reloading.
Jan 06 15:16:08 compute-0 systemd-sysv-generator[68155]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:16:08 compute-0 systemd-rc-local-generator[68150]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:16:08 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 06 15:16:08 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 06 15:16:08 compute-0 sudo[68085]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:09 compute-0 sudo[68312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpkzppjbdikjtarrrhaehgtkoezfgvwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712568.9581132-155-248301485628959/AnsiballZ_stat.py'
Jan 06 15:16:09 compute-0 sudo[68312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:09 compute-0 python3.9[68314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:09 compute-0 sudo[68312]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:10 compute-0 sudo[68435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvqlllyckyupqcnwnqnhbhxjlttgiixh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712568.9581132-155-248301485628959/AnsiballZ_copy.py'
Jan 06 15:16:10 compute-0 sudo[68435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:10 compute-0 python3.9[68437]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712568.9581132-155-248301485628959/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:10 compute-0 sudo[68435]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:10 compute-0 sudo[68587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmxptlshthtbmcjdpqubncvwjpcrmccl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712570.447453-170-875425527055/AnsiballZ_stat.py'
Jan 06 15:16:10 compute-0 sudo[68587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:10 compute-0 python3.9[68589]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:11 compute-0 sudo[68587]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:11 compute-0 sudo[68710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqaxqzogfkdgqkhfjmtsofgceugekwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712570.447453-170-875425527055/AnsiballZ_copy.py'
Jan 06 15:16:11 compute-0 sudo[68710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:11 compute-0 python3.9[68712]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712570.447453-170-875425527055/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:11 compute-0 sudo[68710]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:12 compute-0 sudo[68862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dldfizryxjdbxlcczvfgflswsfumncwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712571.91392-185-46142319912460/AnsiballZ_systemd.py'
Jan 06 15:16:12 compute-0 sudo[68862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:12 compute-0 python3.9[68864]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:16:12 compute-0 systemd[1]: Reloading.
Jan 06 15:16:12 compute-0 systemd-rc-local-generator[68895]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:16:12 compute-0 systemd-sysv-generator[68898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:16:12 compute-0 systemd[1]: Reloading.
Jan 06 15:16:13 compute-0 systemd-rc-local-generator[68929]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:16:13 compute-0 systemd-sysv-generator[68933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:16:13 compute-0 systemd[1]: Starting Create netns directory...
Jan 06 15:16:13 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 06 15:16:13 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 06 15:16:13 compute-0 systemd[1]: Finished Create netns directory.
Jan 06 15:16:13 compute-0 sudo[68862]: pam_unix(sudo:session): session closed for user root
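[editor's note] The tasks above drop unit files into /etc/systemd/system, matching 91-*.preset files into /etc/systemd/system-preset, and then use the ansible.builtin.systemd module with daemon_reload=True, enabled=True, state=started. A minimal sketch of the equivalent manual sequence, assuming the unit and preset files are already in place as shown in the log:

    # pick up the newly installed unit files, then enable and start them in one step
    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service netns-placeholder.service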
Jan 06 15:16:14 compute-0 python3.9[69091]: ansible-ansible.builtin.service_facts Invoked
Jan 06 15:16:14 compute-0 network[69108]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 06 15:16:14 compute-0 network[69109]: 'network-scripts' will be removed from distribution in near future.
Jan 06 15:16:14 compute-0 network[69110]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 06 15:16:18 compute-0 sudo[69370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-draqgaemfhzeetutfuoglvradxbhmxvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712578.4766772-201-216027944133834/AnsiballZ_systemd.py'
Jan 06 15:16:18 compute-0 sudo[69370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:19 compute-0 python3.9[69372]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:16:19 compute-0 systemd[1]: Reloading.
Jan 06 15:16:19 compute-0 systemd-rc-local-generator[69399]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:16:19 compute-0 systemd-sysv-generator[69403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:16:19 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 06 15:16:19 compute-0 iptables.init[69412]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 06 15:16:19 compute-0 iptables.init[69412]: iptables: Flushing firewall rules: [  OK  ]
Jan 06 15:16:19 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 06 15:16:19 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 06 15:16:19 compute-0 sudo[69370]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:20 compute-0 sudo[69606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpdrkffnicqmpfyphicwxxqdfafqvsdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712580.0332572-201-70415944675831/AnsiballZ_systemd.py'
Jan 06 15:16:20 compute-0 sudo[69606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:20 compute-0 python3.9[69608]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:16:20 compute-0 sudo[69606]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:21 compute-0 sudo[69760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvhkvjcqnaxjyjbjspokgxsfgyhtbmmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712581.0019767-217-172441126301905/AnsiballZ_systemd.py'
Jan 06 15:16:21 compute-0 sudo[69760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:21 compute-0 python3.9[69762]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:16:21 compute-0 systemd[1]: Reloading.
Jan 06 15:16:21 compute-0 systemd-rc-local-generator[69792]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:16:21 compute-0 systemd-sysv-generator[69795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:16:21 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 06 15:16:22 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 06 15:16:22 compute-0 sudo[69760]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:22 compute-0 sudo[69952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smfyvtfmkebpithloanskayyvpuvpghg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712582.19505-225-220082900859487/AnsiballZ_command.py'
Jan 06 15:16:22 compute-0 sudo[69952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:22 compute-0 python3.9[69954]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:16:22 compute-0 sudo[69952]: pam_unix(sudo:session): session closed for user root
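[editor's note] The preceding tasks stop and disable the legacy iptables/ip6tables services, enable nftables, and clear any loaded ruleset before the EDPM rule files are written. A minimal shell sketch of the same sequence done by hand, using only the service names and the nft command that appear in the log:

    # retire the legacy firewall services
    systemctl disable --now iptables.service ip6tables.service
    # bring up Netfilter Tables and start from an empty ruleset
    systemctl enable --now nftables.service
    nft flush ruleset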
Jan 06 15:16:23 compute-0 sudo[70105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gudcirncphztxpszeiixygfsbeypkvcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712583.3059359-239-136203737145333/AnsiballZ_stat.py'
Jan 06 15:16:23 compute-0 sudo[70105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:23 compute-0 python3.9[70107]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:23 compute-0 sudo[70105]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:24 compute-0 sudo[70230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwfwintcuybqpjngzhdcqjjfivgzjchy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712583.3059359-239-136203737145333/AnsiballZ_copy.py'
Jan 06 15:16:24 compute-0 sudo[70230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:24 compute-0 python3.9[70232]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712583.3059359-239-136203737145333/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:24 compute-0 sudo[70230]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:25 compute-0 sudo[70383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zurucvocfjsrqfauzztkfnlizimfrdcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712584.746874-254-246687184682313/AnsiballZ_systemd.py'
Jan 06 15:16:25 compute-0 sudo[70383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:25 compute-0 python3.9[70385]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:16:25 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 06 15:16:25 compute-0 sshd[1006]: Received SIGHUP; restarting.
Jan 06 15:16:25 compute-0 sshd[1006]: Server listening on 0.0.0.0 port 22.
Jan 06 15:16:25 compute-0 sshd[1006]: Server listening on :: port 22.
Jan 06 15:16:25 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 06 15:16:25 compute-0 sudo[70383]: pam_unix(sudo:session): session closed for user root
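[editor's note] The sshd_config copy above uses validate=/usr/sbin/sshd -T -f %s, so the candidate file is parse-checked before it replaces the live config, and only then is the sshd unit reloaded (the SIGHUP seen above). A sketch of the same check-then-reload pattern run manually:

    # extended test mode: parse the config and print the effective settings, fail on errors
    /usr/sbin/sshd -T -f /etc/ssh/sshd_config
    # SIGHUP the running daemon so it re-reads the file without dropping sessions
    systemctl reload sshd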
Jan 06 15:16:26 compute-0 sudo[70539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygjotxddylsocfijumdkyeipyvilsdvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712585.7684257-262-181967344146810/AnsiballZ_file.py'
Jan 06 15:16:26 compute-0 sudo[70539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:26 compute-0 python3.9[70541]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:26 compute-0 sudo[70539]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:26 compute-0 sudo[70691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzywjoszimhrsxisknaxuyqwupmikshp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712586.4573162-270-184381245259834/AnsiballZ_stat.py'
Jan 06 15:16:26 compute-0 sudo[70691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:27 compute-0 python3.9[70693]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:27 compute-0 sudo[70691]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:27 compute-0 sudo[70814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maeordrrjvzfowdueiisqneubgeeunfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712586.4573162-270-184381245259834/AnsiballZ_copy.py'
Jan 06 15:16:27 compute-0 sudo[70814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:27 compute-0 python3.9[70816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712586.4573162-270-184381245259834/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:27 compute-0 sudo[70814]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:28 compute-0 sudo[70966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhlxkarneosodakyisetjxuexztzffje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712588.1189005-288-19645539563640/AnsiballZ_timezone.py'
Jan 06 15:16:28 compute-0 sudo[70966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:28 compute-0 python3.9[70968]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 06 15:16:28 compute-0 systemd[1]: Starting Time & Date Service...
Jan 06 15:16:28 compute-0 systemd[1]: Started Time & Date Service.
Jan 06 15:16:29 compute-0 sudo[70966]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:29 compute-0 sudo[71122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rperwyamwfvfdcntvmcbnlgqpzroermw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712589.3148947-297-155690860249210/AnsiballZ_file.py'
Jan 06 15:16:29 compute-0 sudo[71122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:29 compute-0 python3.9[71124]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:29 compute-0 sudo[71122]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:30 compute-0 sudo[71274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykuncszigtrtehlcadcwsehsgvhzfslt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712589.9978356-305-24574812169407/AnsiballZ_stat.py'
Jan 06 15:16:30 compute-0 sudo[71274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:30 compute-0 python3.9[71276]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:30 compute-0 sudo[71274]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:30 compute-0 sudo[71397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srspniaryehvxqvculloomwmamorvfyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712589.9978356-305-24574812169407/AnsiballZ_copy.py'
Jan 06 15:16:30 compute-0 sudo[71397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:31 compute-0 python3.9[71399]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712589.9978356-305-24574812169407/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:31 compute-0 sudo[71397]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:31 compute-0 sudo[71549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laeiwnmhihfpahbrswfglfrcixhikois ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712591.3840468-320-36641916161841/AnsiballZ_stat.py'
Jan 06 15:16:31 compute-0 sudo[71549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:31 compute-0 python3.9[71551]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:31 compute-0 sudo[71549]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:32 compute-0 sudo[71672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uncvsseamabpnrsgkqlfgigbjsqdcimm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712591.3840468-320-36641916161841/AnsiballZ_copy.py'
Jan 06 15:16:32 compute-0 sudo[71672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:32 compute-0 python3.9[71674]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712591.3840468-320-36641916161841/.source.yaml _original_basename=.jvzfb_7d follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:32 compute-0 sudo[71672]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:33 compute-0 sudo[71824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfglmiahbkrqbwkfwtihkvzyjyrzuepe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712592.6566727-335-170594801005165/AnsiballZ_stat.py'
Jan 06 15:16:33 compute-0 sudo[71824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:33 compute-0 python3.9[71826]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:33 compute-0 sudo[71824]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:33 compute-0 sudo[71947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtyaotaerpjfbngipkvbgrwcflkhpuji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712592.6566727-335-170594801005165/AnsiballZ_copy.py'
Jan 06 15:16:33 compute-0 sudo[71947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:33 compute-0 python3.9[71949]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712592.6566727-335-170594801005165/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:33 compute-0 sudo[71947]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:34 compute-0 sudo[72099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sniqueiyoimlxgjjwdnluulqiiavyvbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712594.12776-350-113609571205862/AnsiballZ_command.py'
Jan 06 15:16:34 compute-0 sudo[72099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:34 compute-0 python3.9[72101]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:16:34 compute-0 sudo[72099]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:35 compute-0 sudo[72252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhckeqjyvopuceschniiekybrmptiwop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712594.90933-358-19983079860929/AnsiballZ_command.py'
Jan 06 15:16:35 compute-0 sudo[72252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:35 compute-0 python3.9[72254]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:16:35 compute-0 sudo[72252]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:36 compute-0 sudo[72405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xejrezazobrcwukmborvawtgbkjkvlge ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767712595.7083025-366-250108463675488/AnsiballZ_edpm_nftables_from_files.py'
Jan 06 15:16:36 compute-0 sudo[72405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:36 compute-0 python3[72407]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 06 15:16:36 compute-0 sudo[72405]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:37 compute-0 sudo[72557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgtqjszkqxenmlzikbfhrfnzsncwbimw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712596.720115-374-216127436572324/AnsiballZ_stat.py'
Jan 06 15:16:37 compute-0 sudo[72557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:37 compute-0 python3.9[72559]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:37 compute-0 sudo[72557]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:37 compute-0 sudo[72680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjmzszllxsbonewrpffceioxyhgutkeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712596.720115-374-216127436572324/AnsiballZ_copy.py'
Jan 06 15:16:37 compute-0 sudo[72680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:38 compute-0 python3.9[72682]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712596.720115-374-216127436572324/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:38 compute-0 sudo[72680]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:38 compute-0 sudo[72832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcxygrgyueuwtvdyrzdonpwqkulbnedu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712598.2800035-389-111414375792894/AnsiballZ_stat.py'
Jan 06 15:16:38 compute-0 sudo[72832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:38 compute-0 python3.9[72834]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:38 compute-0 sudo[72832]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:39 compute-0 sudo[72955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltsdefcyxxgcyruaptncdmxfuxljxoet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712598.2800035-389-111414375792894/AnsiballZ_copy.py'
Jan 06 15:16:39 compute-0 sudo[72955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:39 compute-0 python3.9[72957]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712598.2800035-389-111414375792894/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:39 compute-0 sudo[72955]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:39 compute-0 sudo[73107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhkmvfgdlcajxeqlafqrfbbrtrzjignp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712599.6510217-404-153198793178939/AnsiballZ_stat.py'
Jan 06 15:16:39 compute-0 sudo[73107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:40 compute-0 python3.9[73109]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:40 compute-0 sudo[73107]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:40 compute-0 sudo[73230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alagjcegkgxmkkquseeplmypldwalspm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712599.6510217-404-153198793178939/AnsiballZ_copy.py'
Jan 06 15:16:40 compute-0 sudo[73230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:40 compute-0 python3.9[73232]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712599.6510217-404-153198793178939/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:40 compute-0 sudo[73230]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:41 compute-0 sudo[73382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llxnhvqmyhhpbqstpukmmblmotthakcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712601.0263329-419-197018568170874/AnsiballZ_stat.py'
Jan 06 15:16:41 compute-0 sudo[73382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:41 compute-0 python3.9[73384]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:41 compute-0 sudo[73382]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:42 compute-0 sudo[73505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxmqrnrzsyazewhzuiwvipxzbkunkcya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712601.0263329-419-197018568170874/AnsiballZ_copy.py'
Jan 06 15:16:42 compute-0 sudo[73505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:42 compute-0 python3.9[73507]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712601.0263329-419-197018568170874/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:42 compute-0 sudo[73505]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:42 compute-0 sudo[73657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecaheyfvzwahadewblphcskkzlowyxba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712602.5516279-434-138483466723734/AnsiballZ_stat.py'
Jan 06 15:16:42 compute-0 sudo[73657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:43 compute-0 python3.9[73659]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:16:43 compute-0 sudo[73657]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:43 compute-0 sudo[73780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqwhwbgsaebualsvpaljbfbkrhxmjell ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712602.5516279-434-138483466723734/AnsiballZ_copy.py'
Jan 06 15:16:43 compute-0 sudo[73780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:43 compute-0 python3.9[73782]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712602.5516279-434-138483466723734/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:43 compute-0 sudo[73780]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:44 compute-0 sudo[73932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdwljdapbkgeeaznglxxzxinbhtvwrcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712604.1231103-449-208111513965191/AnsiballZ_file.py'
Jan 06 15:16:44 compute-0 sudo[73932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:44 compute-0 python3.9[73934]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:44 compute-0 sudo[73932]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:45 compute-0 sudo[74084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gquihabfyaxvyzamyswtbnsvfosklxyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712604.8843129-457-240185302031517/AnsiballZ_command.py'
Jan 06 15:16:45 compute-0 sudo[74084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:45 compute-0 python3.9[74086]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:16:45 compute-0 sudo[74084]: pam_unix(sudo:session): session closed for user root
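[editor's note] Before any of the EDPM nftables fragments are activated, the task above concatenates all five files and dry-runs them through nft. The pipeline as logged:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: check only, nothing is committed to the kernel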
Jan 06 15:16:46 compute-0 sudo[74243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyhslrieiwlnuxleryxtpjeyftbumlie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712605.616453-465-196286215245103/AnsiballZ_blockinfile.py'
Jan 06 15:16:46 compute-0 sudo[74243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:46 compute-0 python3.9[74245]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:46 compute-0 sudo[74243]: pam_unix(sudo:session): session closed for user root
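[editor's note] The blockinfile task above appends an Ansible-managed include block to /etc/sysconfig/nftables.conf and validates the whole file with nft -c -f %s. Given the markers and includes shown in the invocation, the managed block in that file would read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK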
Jan 06 15:16:46 compute-0 sudo[74396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cablpmkndzwanzgrhvdiobfzpdbjpimr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712606.526663-474-249819786162900/AnsiballZ_file.py'
Jan 06 15:16:46 compute-0 sudo[74396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:47 compute-0 python3.9[74398]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:47 compute-0 sudo[74396]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:47 compute-0 sudo[74548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sesqmgqjqiwcbqfgopjfhrzicwplxrgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712607.1967595-474-144825250204458/AnsiballZ_file.py'
Jan 06 15:16:47 compute-0 sudo[74548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:47 compute-0 python3.9[74550]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:47 compute-0 sudo[74548]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:48 compute-0 sudo[74700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufxuirecupxygutztnzvupupgehncodn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712607.8699467-489-74101040608407/AnsiballZ_mount.py'
Jan 06 15:16:48 compute-0 sudo[74700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:48 compute-0 python3.9[74702]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 06 15:16:48 compute-0 sudo[74700]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:48 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 06 15:16:48 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 06 15:16:49 compute-0 sudo[74854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaxutwmgxyueakydjjklrkpmqxhbcgfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712608.7812562-489-52864087146932/AnsiballZ_mount.py'
Jan 06 15:16:49 compute-0 sudo[74854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:49 compute-0 python3.9[74856]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 06 15:16:49 compute-0 sudo[74854]: pam_unix(sudo:session): session closed for user root
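[editor's note] The two ansible.posix.mount tasks mount hugetlbfs at /dev/hugepages1G and /dev/hugepages2M with state=mounted and boot=True, i.e. persistent fstab entries plus an immediate mount. A sketch of the equivalent manual setup, with the fstab lines inferred from the module parameters (src=none, dump=0, passno=0):

    # /etc/fstab entries for the 1G and 2M hugepage mount points
    none  /dev/hugepages1G  hugetlbfs  pagesize=1G  0  0
    none  /dev/hugepages2M  hugetlbfs  pagesize=2M  0  0
    # mount them now rather than waiting for the next boot
    mount /dev/hugepages1G
    mount /dev/hugepages2M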
Jan 06 15:16:49 compute-0 sshd-session[65697]: Connection closed by 192.168.122.30 port 59414
Jan 06 15:16:49 compute-0 sshd-session[65694]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:16:49 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 06 15:16:49 compute-0 systemd[1]: session-15.scope: Consumed 41.480s CPU time.
Jan 06 15:16:49 compute-0 systemd-logind[791]: Session 15 logged out. Waiting for processes to exit.
Jan 06 15:16:49 compute-0 systemd-logind[791]: Removed session 15.
Jan 06 15:16:55 compute-0 sshd-session[74882]: Accepted publickey for zuul from 192.168.122.30 port 48694 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:16:55 compute-0 systemd-logind[791]: New session 16 of user zuul.
Jan 06 15:16:55 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 06 15:16:55 compute-0 sshd-session[74882]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:16:56 compute-0 sudo[75035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdyzuamtzbkghejpbbstjrgjcekpaizt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712615.6873584-16-102266647832592/AnsiballZ_tempfile.py'
Jan 06 15:16:56 compute-0 sudo[75035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:56 compute-0 python3.9[75037]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 06 15:16:56 compute-0 sudo[75035]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:57 compute-0 sudo[75187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxepkyikxnmncyrmixqgvtoximyzuihk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712616.7406733-28-139128719763510/AnsiballZ_stat.py'
Jan 06 15:16:57 compute-0 sudo[75187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:57 compute-0 python3.9[75189]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:16:57 compute-0 sudo[75187]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:58 compute-0 sudo[75339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qluoxltknlrpwwayaytfujmrtkeucibx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712617.6719267-38-106419183563249/AnsiballZ_setup.py'
Jan 06 15:16:58 compute-0 sudo[75339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:58 compute-0 python3.9[75341]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:16:58 compute-0 sudo[75339]: pam_unix(sudo:session): session closed for user root
Jan 06 15:16:59 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 06 15:16:59 compute-0 sudo[75494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfqummyczzavtxlwofxuvfjjjnkvlwuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712618.9116876-47-224283031462059/AnsiballZ_blockinfile.py'
Jan 06 15:16:59 compute-0 sudo[75494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:16:59 compute-0 python3.9[75496]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJIP4Zijo+vou/VpSavA53uwEvxdmAeqw7LuIpUHSQ90EllIxibOhyK/ah8jeN+DmZJ8iyfEuHDbeXfA6BAfrRV6qKNnZXmmB8DKn64Z4bl2zBxbgfLVQCccLXFj4wAwMpivTQtOWIZ4kFgsPPE8sncVsd0I4cRmRLvSk3YwtEC/cIgk9WPH8mzFZNzRZXZjDkxNOVeNG6kRJeYhdkvnUXLo5QFXsuHpV0muRyzCPK6VL3Zi4HxmrxEot0KVbYcFXBDvrUo0wo62PMhS45F66IYJCtdTxiJG8cXDxoLdFYx/2JVXgUZfqSLBR9R5GVYGj1SmDUZ1/MHM5y9DBaV5trklR4byHqqVrZpAPWhJGhWMm5MYkNgIJvz9XjV6vtUl4xXz6cku17WOu9hqCy2YXHAJM2yMMDCUvEdHJjxg3rrgPPHeHNDKPKXbPuV+lJNjIxkoQe6BpuHkTZFtdyOYNqTL2Plx5fKoM5gJoWyUYruW9znV/7ClQLM69W8VLLsTs=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHQXRsK1JeNg661qL3dWfNlU/oY17nfu5h8loZpU0/eJ
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCDHrNksCPFPrPPMaUll71N1xR1xDqXRW+y+da0Twhrry14cRD+lDLhDIJgTCtQf9C5SpFuxd2ZRYwu9keNF3rQ=
                                             create=True mode=0644 path=/tmp/ansible.oxj2vtll state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:16:59 compute-0 sudo[75494]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:00 compute-0 sudo[75646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmvvlircnqwiimwdxkalwnpivjnhqeei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712619.7459872-55-237552185495079/AnsiballZ_command.py'
Jan 06 15:17:00 compute-0 sudo[75646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:00 compute-0 python3.9[75648]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.oxj2vtll' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:17:00 compute-0 sudo[75646]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:01 compute-0 sudo[75800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atguzimcrzshkaprctllvvqoimgrukzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712620.6811109-63-86160293982104/AnsiballZ_file.py'
Jan 06 15:17:01 compute-0 sudo[75800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:01 compute-0 python3.9[75802]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.oxj2vtll state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:01 compute-0 sudo[75800]: pam_unix(sudo:session): session closed for user root
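[editor's note] The task unit ending here distributes SSH host keys: create a temp file, gather the host's public keys, write them into the temp file between ANSIBLE MANAGED BLOCK markers, shell-copy it over /etc/ssh/ssh_known_hosts, then delete the temp file. A rough Python sketch of the same flow, assuming the marker strings and destination seen in the log (the helper itself is hypothetical):

# Illustrative sketch: tempfile -> managed block -> /etc/ssh/ssh_known_hosts -> cleanup.
import os
import subprocess
import tempfile

def render_known_hosts(entries):
    # Same markers as the blockinfile call above: "# {BEGIN,END} ANSIBLE MANAGED BLOCK".
    body = "\n".join(entries)
    return "# BEGIN ANSIBLE MANAGED BLOCK\n" + body + "\n# END ANSIBLE MANAGED BLOCK\n"

def install_known_hosts(entries, dest="/etc/ssh/ssh_known_hosts"):
    fd, tmp = tempfile.mkstemp(prefix="ansible.")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(render_known_hosts(entries))
        # Same effect as the logged shell step: cat <tmpfile> > /etc/ssh/ssh_known_hosts
        with open(tmp) as src, open(dest, "w") as dst:
            dst.write(src.read())
        os.chmod(dest, 0o644)
    finally:
        os.unlink(tmp)

if __name__ == "__main__":
    # Gather the host keys locally; the playbook uses setup facts instead.
    keys = subprocess.run(
        ["ssh-keyscan", "-t", "rsa,ed25519,ecdsa", "compute-0.ctlplane.example.com"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    install_known_hosts(keys)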
Jan 06 15:17:01 compute-0 sshd-session[74885]: Connection closed by 192.168.122.30 port 48694
Jan 06 15:17:01 compute-0 sshd-session[74882]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:17:01 compute-0 systemd-logind[791]: Session 16 logged out. Waiting for processes to exit.
Jan 06 15:17:01 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 06 15:17:01 compute-0 systemd[1]: session-16.scope: Consumed 4.046s CPU time.
Jan 06 15:17:01 compute-0 systemd-logind[791]: Removed session 16.
Jan 06 15:17:07 compute-0 sshd-session[75827]: Accepted publickey for zuul from 192.168.122.30 port 33632 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:17:07 compute-0 systemd-logind[791]: New session 17 of user zuul.
Jan 06 15:17:07 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 06 15:17:07 compute-0 sshd-session[75827]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:17:08 compute-0 python3.9[75980]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:17:09 compute-0 sudo[76134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmxgiufzstewlervoepuyvfcesqilmha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712629.226202-27-236050905553319/AnsiballZ_systemd.py'
Jan 06 15:17:09 compute-0 sudo[76134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:10 compute-0 python3.9[76136]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 06 15:17:10 compute-0 sudo[76134]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:10 compute-0 sudo[76288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uklegmigsaypbnusnnsdlnvvepzrcivt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712630.4725816-35-256744027174495/AnsiballZ_systemd.py'
Jan 06 15:17:10 compute-0 sudo[76288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:11 compute-0 python3.9[76290]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:17:11 compute-0 sudo[76288]: pam_unix(sudo:session): session closed for user root
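[editor's note] The two ansible.builtin.systemd calls above amount to enabling and starting sshd. A minimal sketch of the same result via systemctl (an assumption about intent, not the module's internals):

# Illustrative sketch: enabled=True and state=started for sshd, as direct systemctl calls.
import subprocess

def ensure_sshd_running():
    subprocess.run(["systemctl", "enable", "sshd"], check=True)   # mirrors enabled=True
    subprocess.run(["systemctl", "start", "sshd"], check=True)    # mirrors state=started

if __name__ == "__main__":
    ensure_sshd_running()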
Jan 06 15:17:12 compute-0 sudo[76441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nucmblkalvpuvsavgjjkhaqvztisibel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712631.4540293-44-218448017491233/AnsiballZ_command.py'
Jan 06 15:17:12 compute-0 sudo[76441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:12 compute-0 python3.9[76443]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:17:12 compute-0 sudo[76441]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:12 compute-0 sudo[76594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwkxnyomqfjkrcnyqqiklseeoqgyjsex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712632.4931924-52-163786029408748/AnsiballZ_stat.py'
Jan 06 15:17:12 compute-0 sudo[76594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:13 compute-0 python3.9[76596]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:17:13 compute-0 sudo[76594]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:13 compute-0 sudo[76748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnfqxqdbudgbwgeibwxcfcytnimfopmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712633.3162913-60-212539257949486/AnsiballZ_command.py'
Jan 06 15:17:13 compute-0 sudo[76748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:13 compute-0 python3.9[76750]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:17:13 compute-0 sudo[76748]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:14 compute-0 sudo[76903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hexkxquciirfniulwwxruvljaavtpnhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712634.186684-68-223404668887726/AnsiballZ_file.py'
Jan 06 15:17:14 compute-0 sudo[76903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:14 compute-0 python3.9[76905]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:14 compute-0 sudo[76903]: pam_unix(sudo:session): session closed for user root
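[editor's note] The nftables unit ending here loads the EDPM chain definitions unconditionally, then re-applies the rule set only when the edpm-rules.nft.changed sentinel exists, by piping the flush, rule, and jump-update files into a single nft run and removing the sentinel afterwards. A Python sketch of that sequence, assuming the same file names under /etc/nftables (the wrapper function is hypothetical):

# Illustrative sketch: nft -f edpm-chains.nft, then
# cat edpm-flushes.nft edpm-rules.nft edpm-update-jumps.nft | nft -f -  (only if changed).
import os
import subprocess

NFT_DIR = "/etc/nftables"
SENTINEL = os.path.join(NFT_DIR, "edpm-rules.nft.changed")

def apply_edpm_nftables():
    # Always (re)load the chain definitions.
    subprocess.run(["nft", "-f", os.path.join(NFT_DIR, "edpm-chains.nft")], check=True)

    # Only re-apply rules when the deployment marked them as changed.
    if not os.path.exists(SENTINEL):
        return

    ruleset = ""
    for name in ("edpm-flushes.nft", "edpm-rules.nft", "edpm-update-jumps.nft"):
        with open(os.path.join(NFT_DIR, name)) as f:
            ruleset += f.read() + "\n"

    subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)
    os.unlink(SENTINEL)

if __name__ == "__main__":
    apply_edpm_nftables()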
Jan 06 15:17:15 compute-0 sshd-session[75830]: Connection closed by 192.168.122.30 port 33632
Jan 06 15:17:15 compute-0 sshd-session[75827]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:17:15 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 06 15:17:15 compute-0 systemd[1]: session-17.scope: Consumed 5.378s CPU time.
Jan 06 15:17:15 compute-0 systemd-logind[791]: Session 17 logged out. Waiting for processes to exit.
Jan 06 15:17:15 compute-0 systemd-logind[791]: Removed session 17.
Jan 06 15:17:20 compute-0 sshd-session[76931]: Accepted publickey for zuul from 192.168.122.30 port 48266 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:17:20 compute-0 systemd-logind[791]: New session 18 of user zuul.
Jan 06 15:17:20 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 06 15:17:20 compute-0 sshd-session[76931]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:17:22 compute-0 python3.9[77084]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:17:22 compute-0 sudo[77238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljswgjbixeyehnbdhsujcdqrwuqmomfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712642.5968688-29-4888452539049/AnsiballZ_setup.py'
Jan 06 15:17:22 compute-0 sudo[77238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:23 compute-0 python3.9[77240]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:17:23 compute-0 sudo[77238]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:24 compute-0 sudo[77322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utkfmwadnuqpuvgxikvgpkjfuwewwdih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712642.5968688-29-4888452539049/AnsiballZ_dnf.py'
Jan 06 15:17:24 compute-0 sudo[77322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:24 compute-0 python3.9[77324]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 06 15:17:25 compute-0 sudo[77322]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:26 compute-0 python3.9[77475]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:17:28 compute-0 python3.9[77626]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
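[editor's note] Two reboot signals are checked here: needs-restarting -r from yum-utils (exit status 1 when a reboot is advised, 0 otherwise) and flag files under /var/lib/openstack/reboot_required/. A small sketch combining both signals; the combination logic is an assumption about intent, not the playbook's actual code:

# Illustrative sketch: reboot is required if needs-restarting says so or flag files exist.
import pathlib
import subprocess

def reboot_required():
    rc = subprocess.run(["needs-restarting", "-r"], capture_output=True).returncode
    flag_dir = pathlib.Path("/var/lib/openstack/reboot_required/")
    flags = [p for p in flag_dir.iterdir() if p.is_file()] if flag_dir.is_dir() else []
    return rc != 0 or bool(flags)

if __name__ == "__main__":
    print("reboot required" if reboot_required() else "no reboot needed")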
Jan 06 15:17:29 compute-0 python3.9[77776]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:17:29 compute-0 python3.9[77926]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:17:30 compute-0 sshd-session[76934]: Connection closed by 192.168.122.30 port 48266
Jan 06 15:17:30 compute-0 sshd-session[76931]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:17:30 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 06 15:17:30 compute-0 systemd[1]: session-18.scope: Consumed 6.644s CPU time.
Jan 06 15:17:30 compute-0 systemd-logind[791]: Session 18 logged out. Waiting for processes to exit.
Jan 06 15:17:30 compute-0 systemd-logind[791]: Removed session 18.
Jan 06 15:17:36 compute-0 sshd-session[77951]: Accepted publickey for zuul from 192.168.122.30 port 58040 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:17:36 compute-0 systemd-logind[791]: New session 19 of user zuul.
Jan 06 15:17:36 compute-0 systemd[1]: Started Session 19 of User zuul.
Jan 06 15:17:36 compute-0 sshd-session[77951]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:17:37 compute-0 python3.9[78104]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:17:39 compute-0 sudo[78258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knasgbselkwctiwflsuvfnuxxpqlyyoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712658.626358-45-110506474903694/AnsiballZ_file.py'
Jan 06 15:17:39 compute-0 sudo[78258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:39 compute-0 python3.9[78260]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:17:39 compute-0 sudo[78258]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:39 compute-0 sudo[78410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuzezhweynkaewzxdmobhyqydvpbvueo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712659.4886234-45-24137076110029/AnsiballZ_file.py'
Jan 06 15:17:39 compute-0 sudo[78410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:40 compute-0 python3.9[78412]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:17:40 compute-0 sudo[78410]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:40 compute-0 sudo[78562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzpopftvcezawmropswttykffjjrkryu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712660.2692702-60-138407375322024/AnsiballZ_stat.py'
Jan 06 15:17:40 compute-0 sudo[78562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:41 compute-0 python3.9[78564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:17:41 compute-0 sudo[78562]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:41 compute-0 sudo[78685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnsesrauyordjdhirijbfwlexcpwfopi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712660.2692702-60-138407375322024/AnsiballZ_copy.py'
Jan 06 15:17:41 compute-0 sudo[78685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:41 compute-0 python3.9[78687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712660.2692702-60-138407375322024/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=09aaa516dfa29bca19a40eb98cd667adc3da5da5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:41 compute-0 sudo[78685]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:42 compute-0 sudo[78837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgsubwmdarijfeeqdkqhfowtzgmpnmwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712662.1684763-60-53397634707389/AnsiballZ_stat.py'
Jan 06 15:17:42 compute-0 sudo[78837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:42 compute-0 python3.9[78839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:17:42 compute-0 sudo[78837]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:43 compute-0 sudo[78960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gznlqydtkxvmtmebttsxfgagqyozqgqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712662.1684763-60-53397634707389/AnsiballZ_copy.py'
Jan 06 15:17:43 compute-0 sudo[78960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:43 compute-0 python3.9[78962]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712662.1684763-60-53397634707389/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=01b38b38b5b6e5beac7d749487a68b0b25132f99 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:43 compute-0 sudo[78960]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:43 compute-0 sudo[79112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apppzsxlrnxzoefqcrrysvjvbcyhpwrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712663.6134946-60-161169509993780/AnsiballZ_stat.py'
Jan 06 15:17:43 compute-0 sudo[79112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:44 compute-0 python3.9[79114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:17:44 compute-0 sudo[79112]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:44 compute-0 sudo[79235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwbhmppdemvbtpmcucuoobnorckhynsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712663.6134946-60-161169509993780/AnsiballZ_copy.py'
Jan 06 15:17:44 compute-0 sudo[79235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:44 compute-0 python3.9[79237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712663.6134946-60-161169509993780/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f5e866394bfdd16131a28d4f088f4c7480f04963 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:44 compute-0 sudo[79235]: pam_unix(sudo:session): session closed for user root
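[editor's note] The block ending here is the first instance of a pattern that repeats for each service below (telemetry, ovn, libvirt, neutron-metadata, and the CA bundles): create /var/lib/openstack/certs/<service>/default with mode 0755 and setype container_file_t, then for tls.crt, ca.crt, and tls.key stat the destination and copy with mode 0600 only when the content differs. A sketch of that idempotent stat-then-copy step, with hypothetical helper names and example paths taken from the log:

# Illustrative sketch: copy a cert file only when its SHA-1 differs from the destination.
import hashlib
import os
import shutil
from typing import Optional

def sha1_of(path: str) -> Optional[str]:
    if not os.path.exists(path):
        return None
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def install_cert(src: str, dest: str, mode: int = 0o600) -> bool:
    """Copy src to dest only when contents differ; return True if a copy happened."""
    os.makedirs(os.path.dirname(dest), mode=0o755, exist_ok=True)
    if sha1_of(src) == sha1_of(dest):
        return False
    shutil.copyfile(src, dest)
    os.chmod(dest, mode)
    return True

if __name__ == "__main__":
    # Example paths modelled on the log lines above; adjust per service.
    install_cert("compute-0.ctlplane.example.com-tls.crt",
                 "/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt")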
Jan 06 15:17:45 compute-0 sudo[79387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slgohaxqfpsuunkhzpsdstwiqkhfjthc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712665.141396-104-187329659029659/AnsiballZ_file.py'
Jan 06 15:17:45 compute-0 sudo[79387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:45 compute-0 python3.9[79389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:17:45 compute-0 sudo[79387]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:46 compute-0 sudo[79539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgoctakbgvbyzulmsumgajgxoecihemq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712665.877413-104-58486139960961/AnsiballZ_file.py'
Jan 06 15:17:46 compute-0 sudo[79539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:46 compute-0 python3.9[79541]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:17:46 compute-0 sudo[79539]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:46 compute-0 sudo[79691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oufvwrnetybqxxdytrxqbmfyjpvngrtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712666.6446064-119-93644059219373/AnsiballZ_stat.py'
Jan 06 15:17:46 compute-0 sudo[79691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:47 compute-0 python3.9[79693]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:17:47 compute-0 sudo[79691]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:47 compute-0 sudo[79814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qexdbxtrnvmrnwowlbwmyrcohrurwsrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712666.6446064-119-93644059219373/AnsiballZ_copy.py'
Jan 06 15:17:47 compute-0 sudo[79814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:47 compute-0 python3.9[79816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712666.6446064-119-93644059219373/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=cadf19a0f8c2b2a79923741aa1945fc27ebeb195 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:47 compute-0 sudo[79814]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:48 compute-0 sudo[79966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjtistpontwiqtqdzaiklkejncbqygto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712668.0295224-119-104927746508217/AnsiballZ_stat.py'
Jan 06 15:17:48 compute-0 sudo[79966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:48 compute-0 python3.9[79968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:17:48 compute-0 sudo[79966]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:49 compute-0 sudo[80089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tontsxsfxrwpjfjsjcbwkkrkcygglxdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712668.0295224-119-104927746508217/AnsiballZ_copy.py'
Jan 06 15:17:49 compute-0 sudo[80089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:49 compute-0 python3.9[80091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712668.0295224-119-104927746508217/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=01b38b38b5b6e5beac7d749487a68b0b25132f99 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:49 compute-0 sudo[80089]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:49 compute-0 sudo[80241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnksnxigctkopyseqgaqpywdkacnzpes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712669.5053747-119-172764182227137/AnsiballZ_stat.py'
Jan 06 15:17:49 compute-0 sudo[80241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:50 compute-0 python3.9[80243]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:17:50 compute-0 sudo[80241]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:50 compute-0 sudo[80364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otlgzlwvlmkrtbgzvevoysickfkakkgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712669.5053747-119-172764182227137/AnsiballZ_copy.py'
Jan 06 15:17:50 compute-0 sudo[80364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:50 compute-0 python3.9[80366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712669.5053747-119-172764182227137/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ab2c73c794ce939997959e84b7c6bafb49154165 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:50 compute-0 sudo[80364]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:51 compute-0 sudo[80516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfrmzqfsnecctpoijuntuprbfbfdfxgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712671.081672-163-123031783219886/AnsiballZ_file.py'
Jan 06 15:17:51 compute-0 sudo[80516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:51 compute-0 python3.9[80518]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:17:51 compute-0 sudo[80516]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:52 compute-0 sudo[80668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypcizqcjaspqtolwadrbkwusrpaazxns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712671.8458824-163-258916847878732/AnsiballZ_file.py'
Jan 06 15:17:52 compute-0 sudo[80668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:52 compute-0 python3.9[80670]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:17:52 compute-0 sudo[80668]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:53 compute-0 sudo[80820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqftfqydfegecoegrkmgmirgkehncykz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712672.6714287-178-4920069183400/AnsiballZ_stat.py'
Jan 06 15:17:53 compute-0 sudo[80820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:53 compute-0 python3.9[80822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:17:53 compute-0 sudo[80820]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:53 compute-0 sudo[80943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxbidwytcttvhmxfmbazeshfmizwciml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712672.6714287-178-4920069183400/AnsiballZ_copy.py'
Jan 06 15:17:53 compute-0 sudo[80943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:53 compute-0 python3.9[80945]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712672.6714287-178-4920069183400/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c42e6bfeb93ff105b93f7c2d9329daab02f3c43b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:53 compute-0 sudo[80943]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:54 compute-0 sudo[81095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anzeatklsubjqguxhoytnxupuvprxwzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712673.9164116-178-177929848077151/AnsiballZ_stat.py'
Jan 06 15:17:54 compute-0 sudo[81095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:54 compute-0 python3.9[81097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:17:54 compute-0 sudo[81095]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:54 compute-0 sudo[81218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nverehxsjdrbmyxljbtmxozvnhpjgruu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712673.9164116-178-177929848077151/AnsiballZ_copy.py'
Jan 06 15:17:54 compute-0 sudo[81218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:55 compute-0 python3.9[81220]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712673.9164116-178-177929848077151/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3c3dcac4a774afd3ec356ce11a938e5b970cbb16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:55 compute-0 sudo[81218]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:55 compute-0 sudo[81370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfqjrppttaztsrvmqqejdcahavscurmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712675.4035308-178-95516197289744/AnsiballZ_stat.py'
Jan 06 15:17:55 compute-0 sudo[81370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:55 compute-0 python3.9[81372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:17:55 compute-0 sudo[81370]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:56 compute-0 sudo[81493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcpsidtcjckaqbryxalrmtnhvjzyiwkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712675.4035308-178-95516197289744/AnsiballZ_copy.py'
Jan 06 15:17:56 compute-0 sudo[81493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:56 compute-0 python3.9[81495]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712675.4035308-178-95516197289744/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=209d5c880a4d452b64066b31303782c7180b0b09 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:56 compute-0 sudo[81493]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:57 compute-0 sudo[81645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryzhntijtsptiydfedyewqdcnhgeuagq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712676.7060797-222-216302227213148/AnsiballZ_file.py'
Jan 06 15:17:57 compute-0 sudo[81645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:57 compute-0 python3.9[81647]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:17:57 compute-0 sudo[81645]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:57 compute-0 sudo[81797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbyjtjplaotsmkufdfqhmmxusmvkhzwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712677.3740716-222-186896378593206/AnsiballZ_file.py'
Jan 06 15:17:57 compute-0 sudo[81797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:57 compute-0 chronyd[65668]: Selected source 216.197.156.83 (pool.ntp.org)
Jan 06 15:17:57 compute-0 python3.9[81799]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:17:57 compute-0 sudo[81797]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:58 compute-0 sudo[81949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgeicncnafibkklvdlbbufzzdshszfaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712678.1488414-237-115212241060358/AnsiballZ_stat.py'
Jan 06 15:17:58 compute-0 sudo[81949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:58 compute-0 python3.9[81951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:17:58 compute-0 sudo[81949]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:58 compute-0 sudo[82072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcoqepdopwwnlzxnjsprjdydlsdcrwpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712678.1488414-237-115212241060358/AnsiballZ_copy.py'
Jan 06 15:17:58 compute-0 sudo[82072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:59 compute-0 python3.9[82074]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712678.1488414-237-115212241060358/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e680d474a6b06d7e02b6b8276289f1a30dc8737d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:17:59 compute-0 sudo[82072]: pam_unix(sudo:session): session closed for user root
Jan 06 15:17:59 compute-0 sudo[82224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfyfkuiemjxmzicmrdpavobpwhloyyya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712679.397663-237-251272182095606/AnsiballZ_stat.py'
Jan 06 15:17:59 compute-0 sudo[82224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:17:59 compute-0 python3.9[82226]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:17:59 compute-0 sudo[82224]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:00 compute-0 sudo[82347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpyeaqarmobeiwezjpirtpnnityvjite ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712679.397663-237-251272182095606/AnsiballZ_copy.py'
Jan 06 15:18:00 compute-0 sudo[82347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:00 compute-0 python3.9[82349]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712679.397663-237-251272182095606/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=803de7598220d711f2f283b09dc8586f7710b7b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:00 compute-0 sudo[82347]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:00 compute-0 sudo[82499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcooypdtmwlkbnoqhgnlgjvooltzwzea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712680.6289349-237-111818319913982/AnsiballZ_stat.py'
Jan 06 15:18:00 compute-0 sudo[82499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:01 compute-0 python3.9[82501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:01 compute-0 sudo[82499]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:01 compute-0 sudo[82622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkvdqewktyxgdninrqfsdihbulktnepf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712680.6289349-237-111818319913982/AnsiballZ_copy.py'
Jan 06 15:18:01 compute-0 sudo[82622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:01 compute-0 python3.9[82624]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712680.6289349-237-111818319913982/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=875757785ee06077c33eedda2db353b82320667e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:01 compute-0 sudo[82622]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:02 compute-0 sudo[82774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkdxofqeqdlfbdjvdoeclmklsilginqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712682.0399914-281-172001256881714/AnsiballZ_file.py'
Jan 06 15:18:02 compute-0 sudo[82774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:02 compute-0 python3.9[82776]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:02 compute-0 sudo[82774]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:03 compute-0 sudo[82926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksnueslpocqmpyidpvsedwjjzwcrbpww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712682.7847085-281-111711628815749/AnsiballZ_file.py'
Jan 06 15:18:03 compute-0 sudo[82926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:03 compute-0 python3.9[82928]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:03 compute-0 sudo[82926]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:03 compute-0 sudo[83078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufoguyqttioulgejqwllpiyyhvmgseqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712683.4721313-296-193309174375099/AnsiballZ_stat.py'
Jan 06 15:18:03 compute-0 sudo[83078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:03 compute-0 python3.9[83080]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:03 compute-0 sudo[83078]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:04 compute-0 sudo[83201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rafsclbcybcqiwhpsynvizgblnjgdqcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712683.4721313-296-193309174375099/AnsiballZ_copy.py'
Jan 06 15:18:04 compute-0 sudo[83201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:04 compute-0 python3.9[83203]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712683.4721313-296-193309174375099/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f4508ad4c2fabfd4f189dae5ee0fc18cec1d4027 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:04 compute-0 sudo[83201]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:05 compute-0 sudo[83353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcdsrbyjnbvlyyvvarsyrgddxjfnocvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712684.813216-296-93522783322942/AnsiballZ_stat.py'
Jan 06 15:18:05 compute-0 sudo[83353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:05 compute-0 python3.9[83355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:05 compute-0 sudo[83353]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:05 compute-0 sudo[83476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhcfzkusebxsklfcgojexahumzpqpbin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712684.813216-296-93522783322942/AnsiballZ_copy.py'
Jan 06 15:18:05 compute-0 sudo[83476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:06 compute-0 python3.9[83478]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712684.813216-296-93522783322942/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=3c3dcac4a774afd3ec356ce11a938e5b970cbb16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:06 compute-0 sudo[83476]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:06 compute-0 sudo[83628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtrdkjjzrwuhfquuozecqsgoheyzhrdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712686.3027728-296-102831568293038/AnsiballZ_stat.py'
Jan 06 15:18:06 compute-0 sudo[83628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:06 compute-0 python3.9[83630]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:06 compute-0 sudo[83628]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:07 compute-0 sudo[83751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbshplwavavgqedjcerpgzwkvgbgcmfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712686.3027728-296-102831568293038/AnsiballZ_copy.py'
Jan 06 15:18:07 compute-0 sudo[83751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:07 compute-0 python3.9[83753]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712686.3027728-296-102831568293038/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=36449046ca495719bf3d985b89cec9e49f4b0366 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:07 compute-0 sudo[83751]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:08 compute-0 sudo[83903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzyydphkkyyuaolvsnrqyigcahfkpwxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712688.3962634-356-103275101852973/AnsiballZ_file.py'
Jan 06 15:18:08 compute-0 sudo[83903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:08 compute-0 python3.9[83905]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:09 compute-0 sudo[83903]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:09 compute-0 sudo[84055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpqdqjvjbqqohexpjjahdxitwdxzvnax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712689.2195876-364-254284620983890/AnsiballZ_stat.py'
Jan 06 15:18:09 compute-0 sudo[84055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:09 compute-0 python3.9[84057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:09 compute-0 sudo[84055]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:10 compute-0 sudo[84178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hubeuyiurbpzrzhvngtbnxemoekurbyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712689.2195876-364-254284620983890/AnsiballZ_copy.py'
Jan 06 15:18:10 compute-0 sudo[84178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:10 compute-0 python3.9[84180]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712689.2195876-364-254284620983890/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b3b451b437c2b09b799acb3e061225350970588a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:10 compute-0 sudo[84178]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:11 compute-0 sudo[84330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcyghjjrxcuvcvpidolsfqpfzlgumvnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712690.7669716-380-113332228649233/AnsiballZ_file.py'
Jan 06 15:18:11 compute-0 sudo[84330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:11 compute-0 python3.9[84332]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:11 compute-0 sudo[84330]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:11 compute-0 sudo[84482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afdnmceczzuwoubomohovvammomscian ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712691.5193322-388-275645672606944/AnsiballZ_stat.py'
Jan 06 15:18:11 compute-0 sudo[84482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:12 compute-0 python3.9[84484]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:12 compute-0 sudo[84482]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:12 compute-0 sudo[84605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avixjsdixuobcmkhbnnucvzrggfypqgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712691.5193322-388-275645672606944/AnsiballZ_copy.py'
Jan 06 15:18:12 compute-0 sudo[84605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:12 compute-0 python3.9[84607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712691.5193322-388-275645672606944/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b3b451b437c2b09b799acb3e061225350970588a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:12 compute-0 sudo[84605]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:13 compute-0 sudo[84757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prdeivlgfljxfupavgwdzcglegkdgnjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712693.1573856-404-272684119656345/AnsiballZ_file.py'
Jan 06 15:18:13 compute-0 sudo[84757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:13 compute-0 python3.9[84759]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:13 compute-0 sudo[84757]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:14 compute-0 sudo[84909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwbbxoymukaweirkclkebqrpudhobpfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712693.9459922-412-191147134625444/AnsiballZ_stat.py'
Jan 06 15:18:14 compute-0 sudo[84909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:14 compute-0 python3.9[84911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:14 compute-0 sudo[84909]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:15 compute-0 sudo[85032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roadhsyfjhobelwudpqhuszkqzumrzrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712693.9459922-412-191147134625444/AnsiballZ_copy.py'
Jan 06 15:18:15 compute-0 sudo[85032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:15 compute-0 python3.9[85034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712693.9459922-412-191147134625444/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b3b451b437c2b09b799acb3e061225350970588a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:15 compute-0 sudo[85032]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:15 compute-0 sudo[85184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itnfmomybwytegxnpyhhkqfivmomyozl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712695.6132314-428-26635381085263/AnsiballZ_file.py'
Jan 06 15:18:15 compute-0 sudo[85184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:16 compute-0 python3.9[85186]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:16 compute-0 sudo[85184]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:16 compute-0 sudo[85336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxndswxolukhnibwqshzfdzdvvihmfmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712696.4115026-436-100984876178207/AnsiballZ_stat.py'
Jan 06 15:18:16 compute-0 sudo[85336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:16 compute-0 python3.9[85338]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:16 compute-0 sudo[85336]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:17 compute-0 sudo[85459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdtvtyasfwwhztqwgwgdgowqpnunqllt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712696.4115026-436-100984876178207/AnsiballZ_copy.py'
Jan 06 15:18:17 compute-0 sudo[85459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:17 compute-0 python3.9[85461]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712696.4115026-436-100984876178207/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b3b451b437c2b09b799acb3e061225350970588a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:17 compute-0 sudo[85459]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:18 compute-0 sudo[85611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwbubgcyldpgnlznqgfyfcrpzlskbfrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712698.0206897-452-270080352698992/AnsiballZ_file.py'
Jan 06 15:18:18 compute-0 sudo[85611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:18 compute-0 python3.9[85613]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:18 compute-0 sudo[85611]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:19 compute-0 sudo[85763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pokyjbhkyjumolykynoyeqwfqhkrfhsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712698.8188906-460-163318997378583/AnsiballZ_stat.py'
Jan 06 15:18:19 compute-0 sudo[85763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:19 compute-0 python3.9[85765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:19 compute-0 sudo[85763]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:19 compute-0 sudo[85886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwsbwcwirqcmdmjbwprfviqhgcjjgjie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712698.8188906-460-163318997378583/AnsiballZ_copy.py'
Jan 06 15:18:19 compute-0 sudo[85886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:20 compute-0 python3.9[85888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712698.8188906-460-163318997378583/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b3b451b437c2b09b799acb3e061225350970588a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:20 compute-0 sudo[85886]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:20 compute-0 sudo[86038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hellwmdqxzymurvdwawccelhbemtcsfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712700.261629-476-122803194021929/AnsiballZ_file.py'
Jan 06 15:18:20 compute-0 sudo[86038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:20 compute-0 python3.9[86040]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:20 compute-0 sudo[86038]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:21 compute-0 sudo[86190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dizbbnqqgiwhfascxatqiwkbxxiytujs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712701.1095788-484-93042195131684/AnsiballZ_stat.py'
Jan 06 15:18:21 compute-0 sudo[86190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:21 compute-0 python3.9[86192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:21 compute-0 sudo[86190]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:22 compute-0 sudo[86313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deaktwxtuewnykgeoeibihkesptdmfdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712701.1095788-484-93042195131684/AnsiballZ_copy.py'
Jan 06 15:18:22 compute-0 sudo[86313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:22 compute-0 python3.9[86315]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712701.1095788-484-93042195131684/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b3b451b437c2b09b799acb3e061225350970588a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:22 compute-0 sudo[86313]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:23 compute-0 sudo[86465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apgqcdyhwekcvbjssgesewjidedutouf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712702.603928-500-198980228612554/AnsiballZ_file.py'
Jan 06 15:18:23 compute-0 sudo[86465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:23 compute-0 python3.9[86467]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:23 compute-0 sudo[86465]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:23 compute-0 sudo[86617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plimoulxbigozmtpuicxmdracwrsincv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712703.4609861-508-256323233316229/AnsiballZ_stat.py'
Jan 06 15:18:23 compute-0 sudo[86617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:23 compute-0 python3.9[86619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:23 compute-0 sudo[86617]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:24 compute-0 sudo[86740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcczaufulvmlmkzhyxbqzeepcxfvcche ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712703.4609861-508-256323233316229/AnsiballZ_copy.py'
Jan 06 15:18:24 compute-0 sudo[86740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:24 compute-0 python3.9[86742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712703.4609861-508-256323233316229/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b3b451b437c2b09b799acb3e061225350970588a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:24 compute-0 sudo[86740]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:25 compute-0 sudo[86892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isuzfbdvoesqdlwjslfxllqvzczdjxhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712704.7833214-524-155363902974396/AnsiballZ_file.py'
Jan 06 15:18:25 compute-0 sudo[86892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:25 compute-0 python3.9[86894]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:25 compute-0 sudo[86892]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:25 compute-0 sudo[87044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gevcwlyrlezycpzzubttaehsxfffguhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712705.525034-532-60351080132370/AnsiballZ_stat.py'
Jan 06 15:18:25 compute-0 sudo[87044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:25 compute-0 python3.9[87046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:25 compute-0 sudo[87044]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:26 compute-0 sudo[87167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfvdvipryisgtttbwcsgrmtabruizjam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712705.525034-532-60351080132370/AnsiballZ_copy.py'
Jan 06 15:18:26 compute-0 sudo[87167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:26 compute-0 python3.9[87169]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712705.525034-532-60351080132370/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b3b451b437c2b09b799acb3e061225350970588a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:26 compute-0 sudo[87167]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:26 compute-0 sshd-session[77954]: Connection closed by 192.168.122.30 port 58040
Jan 06 15:18:26 compute-0 sshd-session[77951]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:18:26 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 06 15:18:26 compute-0 systemd[1]: session-19.scope: Consumed 40.430s CPU time.
Jan 06 15:18:26 compute-0 systemd-logind[791]: Session 19 logged out. Waiting for processes to exit.
Jan 06 15:18:26 compute-0 systemd-logind[791]: Removed session 19.
Jan 06 15:18:32 compute-0 sshd-session[87194]: Accepted publickey for zuul from 192.168.122.30 port 48762 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:18:32 compute-0 systemd-logind[791]: New session 20 of user zuul.
Jan 06 15:18:32 compute-0 systemd[1]: Started Session 20 of User zuul.
Jan 06 15:18:32 compute-0 sshd-session[87194]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:18:33 compute-0 python3.9[87347]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:18:34 compute-0 sudo[87501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhmsglxlvlrdyzwjauznknvytqckgehj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712713.7246926-29-219723641320495/AnsiballZ_file.py'
Jan 06 15:18:34 compute-0 sudo[87501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:34 compute-0 python3.9[87503]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:34 compute-0 sudo[87501]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:34 compute-0 sudo[87653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mspggcatclctezgjmxlgygcjlrptfsvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712714.6356769-29-169083054965893/AnsiballZ_file.py'
Jan 06 15:18:34 compute-0 sudo[87653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:35 compute-0 python3.9[87655]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:18:35 compute-0 sudo[87653]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:36 compute-0 python3.9[87805]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:18:37 compute-0 sudo[87955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-virjuqvetaosmfackjimijugcnkxoxwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712716.6955245-52-89009788017886/AnsiballZ_seboolean.py'
Jan 06 15:18:37 compute-0 sudo[87955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:37 compute-0 python3.9[87957]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 06 15:18:39 compute-0 sudo[87955]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:39 compute-0 sudo[88111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khurnxeaofvtwddoviexsnhfziovzsto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712719.356988-62-24439910473109/AnsiballZ_setup.py'
Jan 06 15:18:39 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 06 15:18:39 compute-0 sudo[88111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:40 compute-0 python3.9[88113]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:18:40 compute-0 sudo[88111]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:40 compute-0 sudo[88195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqlzyyhpzepqmdncritjgidpfyjfuatj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712719.356988-62-24439910473109/AnsiballZ_dnf.py'
Jan 06 15:18:40 compute-0 sudo[88195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:41 compute-0 python3.9[88197]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:18:43 compute-0 sudo[88195]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:44 compute-0 sudo[88348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnopdplykdieimcfsxagkklbhzztvkoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712723.2831438-74-240635836533415/AnsiballZ_systemd.py'
Jan 06 15:18:44 compute-0 sudo[88348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:44 compute-0 python3.9[88350]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 06 15:18:44 compute-0 sudo[88348]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:45 compute-0 sudo[88503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prwsdgrncfbcmdcsmqxyhyyexocgdhyb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767712724.6132214-82-140543785895265/AnsiballZ_edpm_nftables_snippet.py'
Jan 06 15:18:45 compute-0 sudo[88503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:45 compute-0 python3[88505]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 06 15:18:45 compute-0 sudo[88503]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:45 compute-0 sudo[88655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzkjyfvcgnaiwluwuwxcjxzgbjgjrygz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712725.6236343-91-16845551594634/AnsiballZ_file.py'
Jan 06 15:18:45 compute-0 sudo[88655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:46 compute-0 python3.9[88657]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:46 compute-0 sudo[88655]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:46 compute-0 sudo[88807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iffhxpdzrqovtrwsqrsazcicmaiwdlkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712726.4009168-99-187513421140717/AnsiballZ_stat.py'
Jan 06 15:18:46 compute-0 sudo[88807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:47 compute-0 python3.9[88809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:47 compute-0 sudo[88807]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:47 compute-0 sudo[88885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpwjbipgcalzllgtlllafmjevevliiac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712726.4009168-99-187513421140717/AnsiballZ_file.py'
Jan 06 15:18:47 compute-0 sudo[88885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:47 compute-0 python3.9[88887]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:47 compute-0 sudo[88885]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:48 compute-0 sudo[89037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noqqlojtdqzxqazdafdxwrsckopojstd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712727.9566479-111-103269753425740/AnsiballZ_stat.py'
Jan 06 15:18:48 compute-0 sudo[89037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:48 compute-0 python3.9[89039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:48 compute-0 sudo[89037]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:48 compute-0 sudo[89115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvmhydnywumegleurygrxplcwssneqhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712727.9566479-111-103269753425740/AnsiballZ_file.py'
Jan 06 15:18:48 compute-0 sudo[89115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:49 compute-0 python3.9[89117]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.f0qhx7qv recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:49 compute-0 sudo[89115]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:49 compute-0 sudo[89267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zirjcvdvdvdyullpelyoqscceeugodxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712729.2370148-123-221006295561101/AnsiballZ_stat.py'
Jan 06 15:18:49 compute-0 sudo[89267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:49 compute-0 python3.9[89269]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:49 compute-0 sudo[89267]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:50 compute-0 sudo[89345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwvjrdvxjpoxnugomnudslfifdnmbwqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712729.2370148-123-221006295561101/AnsiballZ_file.py'
Jan 06 15:18:50 compute-0 sudo[89345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:50 compute-0 python3.9[89347]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:50 compute-0 sudo[89345]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:51 compute-0 sudo[89497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxlzlztifwsgwsqwwqnxckcbowjnalmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712730.8128092-136-259434823482815/AnsiballZ_command.py'
Jan 06 15:18:51 compute-0 sudo[89497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:51 compute-0 python3.9[89499]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:18:51 compute-0 sudo[89497]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:52 compute-0 sudo[89650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioxlbklvubfvunufitkpmcmtngtsqdry ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767712731.716612-144-240055524122909/AnsiballZ_edpm_nftables_from_files.py'
Jan 06 15:18:52 compute-0 sudo[89650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:52 compute-0 python3[89652]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 06 15:18:52 compute-0 sudo[89650]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:53 compute-0 sudo[89802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyqxutyrulvnbfodhqswaolhmajwrpav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712732.6942892-152-127039981273680/AnsiballZ_stat.py'
Jan 06 15:18:53 compute-0 sudo[89802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:53 compute-0 python3.9[89804]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:53 compute-0 sudo[89802]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:53 compute-0 sudo[89927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hetkiklhwdbuertzpfiypgsoxyuxarya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712732.6942892-152-127039981273680/AnsiballZ_copy.py'
Jan 06 15:18:53 compute-0 sudo[89927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:53 compute-0 python3.9[89929]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712732.6942892-152-127039981273680/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:53 compute-0 sudo[89927]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:54 compute-0 sudo[90079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvcaaxsywvzpvitxenkuswcwjisqernw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712734.1548555-167-19766918726286/AnsiballZ_stat.py'
Jan 06 15:18:54 compute-0 sudo[90079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:54 compute-0 python3.9[90081]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:54 compute-0 sudo[90079]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:55 compute-0 sudo[90204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymwvwuiryunrowivspbqfwkodjqekeqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712734.1548555-167-19766918726286/AnsiballZ_copy.py'
Jan 06 15:18:55 compute-0 sudo[90204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:55 compute-0 python3.9[90206]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712734.1548555-167-19766918726286/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:55 compute-0 sudo[90204]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:55 compute-0 sudo[90356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otbfqngrmyneqpjjsnqjldknhsiqltam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712735.6402535-182-114796795553631/AnsiballZ_stat.py'
Jan 06 15:18:55 compute-0 sudo[90356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:56 compute-0 python3.9[90358]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:56 compute-0 sudo[90356]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:56 compute-0 sudo[90481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmtzbqjiadbsxtopaguxazvupdwbioib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712735.6402535-182-114796795553631/AnsiballZ_copy.py'
Jan 06 15:18:56 compute-0 sudo[90481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:56 compute-0 python3.9[90483]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712735.6402535-182-114796795553631/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:56 compute-0 sudo[90481]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:57 compute-0 sudo[90633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iijyvidhwhysqzxeeyhjovkgqcsvbjgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712737.1011417-197-104252481932397/AnsiballZ_stat.py'
Jan 06 15:18:57 compute-0 sudo[90633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:57 compute-0 python3.9[90635]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:57 compute-0 sudo[90633]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:58 compute-0 sudo[90758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suayllheqhjhzsrjjiafpnfdvqfwuwqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712737.1011417-197-104252481932397/AnsiballZ_copy.py'
Jan 06 15:18:58 compute-0 sudo[90758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:58 compute-0 python3.9[90760]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712737.1011417-197-104252481932397/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:18:58 compute-0 sudo[90758]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:59 compute-0 sudo[90910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ythkttljuxbsqwlkpscqvuohyfmuajys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712738.7506647-212-215740305617720/AnsiballZ_stat.py'
Jan 06 15:18:59 compute-0 sudo[90910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:59 compute-0 python3.9[90912]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:18:59 compute-0 sudo[90910]: pam_unix(sudo:session): session closed for user root
Jan 06 15:18:59 compute-0 sudo[91035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qodwycodgrcufeukgixknluntummnsdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712738.7506647-212-215740305617720/AnsiballZ_copy.py'
Jan 06 15:18:59 compute-0 sudo[91035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:18:59 compute-0 python3.9[91037]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767712738.7506647-212-215740305617720/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:00 compute-0 sudo[91035]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:00 compute-0 sudo[91187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyznxmbhgqqmxuuhbvgfwxmyakdximkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712740.2442763-227-249633238056440/AnsiballZ_file.py'
Jan 06 15:19:00 compute-0 sudo[91187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:00 compute-0 python3.9[91189]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:00 compute-0 sudo[91187]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:01 compute-0 sudo[91339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkunsquxqtzsbckbcvcsseiciaxrenbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712740.990345-235-9124724396394/AnsiballZ_command.py'
Jan 06 15:19:01 compute-0 sudo[91339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:01 compute-0 python3.9[91341]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:19:01 compute-0 sudo[91339]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:02 compute-0 sudo[91494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nncodpyxbfyzytdtojqrnqowkbelogvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712741.8157938-243-248506725646851/AnsiballZ_blockinfile.py'
Jan 06 15:19:02 compute-0 sudo[91494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:02 compute-0 python3.9[91496]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:02 compute-0 sudo[91494]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:03 compute-0 sudo[91646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jldfnyjulmtuysmghxhisytmchyjmamu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712742.858153-252-94046617159664/AnsiballZ_command.py'
Jan 06 15:19:03 compute-0 sudo[91646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:03 compute-0 python3.9[91648]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:19:03 compute-0 sudo[91646]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:03 compute-0 sudo[91799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihjxsoiytkneadpzakiavvhgbtkvkvvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712743.6454456-260-232434913407050/AnsiballZ_stat.py'
Jan 06 15:19:03 compute-0 sudo[91799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:04 compute-0 python3.9[91801]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:19:04 compute-0 sudo[91799]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:04 compute-0 sudo[91953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oajjtixonzhhggdljgxhbzyatppsdvpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712744.4064527-268-158801265668920/AnsiballZ_command.py'
Jan 06 15:19:04 compute-0 sudo[91953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:04 compute-0 python3.9[91955]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:19:05 compute-0 sudo[91953]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:05 compute-0 sudo[92108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eatkubkqczdpbxahgvmpecqdfgierwqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712745.1759841-276-244704582752564/AnsiballZ_file.py'
Jan 06 15:19:05 compute-0 sudo[92108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:05 compute-0 python3.9[92110]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:05 compute-0 sudo[92108]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:07 compute-0 python3.9[92260]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:19:08 compute-0 sudo[92411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhznfbxxzybrikamioistwybbwbaeeso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712747.6827912-316-225957250027727/AnsiballZ_command.py'
Jan 06 15:19:08 compute-0 sudo[92411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:08 compute-0 python3.9[92413]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:9d:bd:06:c0" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:19:08 compute-0 ovs-vsctl[92414]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:9d:bd:06:c0 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 06 15:19:08 compute-0 sudo[92411]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:08 compute-0 sudo[92564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkkwmbeioempsrrpxupgzmhrkczrnsbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712748.546902-325-28295469454285/AnsiballZ_command.py'
Jan 06 15:19:08 compute-0 sudo[92564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:09 compute-0 python3.9[92566]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:19:09 compute-0 sudo[92564]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:09 compute-0 sudo[92719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bawjweivenkkhoxtbiyxfvmplbbhrhyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712749.3126254-333-5387350512508/AnsiballZ_command.py'
Jan 06 15:19:09 compute-0 sudo[92719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:09 compute-0 python3.9[92721]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:19:09 compute-0 ovs-vsctl[92722]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 06 15:19:09 compute-0 sudo[92719]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:10 compute-0 python3.9[92872]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:19:11 compute-0 sudo[93024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xujnlrfnvztsgvukcxwubmkurqwokbay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712750.882309-350-96346727263440/AnsiballZ_file.py'
Jan 06 15:19:11 compute-0 sudo[93024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:11 compute-0 python3.9[93026]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:19:11 compute-0 sudo[93024]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:11 compute-0 sudo[93177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xttrkcrcmudvlppqjcgdoajrcuuuglfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712751.6234381-358-80397886390443/AnsiballZ_stat.py'
Jan 06 15:19:11 compute-0 sudo[93177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:12 compute-0 python3.9[93179]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:19:12 compute-0 sudo[93177]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:12 compute-0 sudo[93255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acgqcslimczyskpigqvbphjmledihsyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712751.6234381-358-80397886390443/AnsiballZ_file.py'
Jan 06 15:19:12 compute-0 sudo[93255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:12 compute-0 python3.9[93257]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:19:12 compute-0 sudo[93255]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:13 compute-0 sudo[93407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipegbuqtlvjsewdvpubjirogznjmohtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712752.900771-358-248717043118333/AnsiballZ_stat.py'
Jan 06 15:19:13 compute-0 sudo[93407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:13 compute-0 python3.9[93409]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:19:13 compute-0 sudo[93407]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:13 compute-0 sudo[93485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhrntkcverucaznlexjsqqhlbzvlrhby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712752.900771-358-248717043118333/AnsiballZ_file.py'
Jan 06 15:19:13 compute-0 sudo[93485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:13 compute-0 python3.9[93487]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:19:13 compute-0 sudo[93485]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:14 compute-0 sudo[93637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tseziqgtaolmuocmpicbzffdxypfkewy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712754.0515432-381-174568167049854/AnsiballZ_file.py'
Jan 06 15:19:14 compute-0 sudo[93637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:14 compute-0 python3.9[93639]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:14 compute-0 sudo[93637]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:15 compute-0 sudo[93789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvvrpdrmsqtisuabfofiwytzfmomzpii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712754.8762708-389-93479625846637/AnsiballZ_stat.py'
Jan 06 15:19:15 compute-0 sudo[93789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:15 compute-0 python3.9[93791]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:19:15 compute-0 sudo[93789]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:15 compute-0 sudo[93867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnvlfypmapmodivhzywqwkkqiyoxaoce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712754.8762708-389-93479625846637/AnsiballZ_file.py'
Jan 06 15:19:15 compute-0 sudo[93867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:16 compute-0 python3.9[93869]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:16 compute-0 sudo[93867]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:17 compute-0 sudo[94020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnzgklmhzqbvisbwnnmkmupdnuryrnwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712756.270852-401-184923365359799/AnsiballZ_stat.py'
Jan 06 15:19:17 compute-0 sudo[94020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:17 compute-0 python3.9[94022]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:19:17 compute-0 sudo[94020]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:17 compute-0 sudo[94098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqtcrjwajyrbhopoawqeekrpmsrrqcwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712756.270852-401-184923365359799/AnsiballZ_file.py'
Jan 06 15:19:17 compute-0 sudo[94098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:17 compute-0 python3.9[94100]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:17 compute-0 sudo[94098]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:18 compute-0 sudo[94250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwgbapyhkwawmwjladydmamjguamygzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712758.0624347-413-113525134393277/AnsiballZ_systemd.py'
Jan 06 15:19:18 compute-0 sudo[94250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:18 compute-0 python3.9[94252]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:19:18 compute-0 systemd[1]: Reloading.
Jan 06 15:19:18 compute-0 systemd-rc-local-generator[94281]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:19:18 compute-0 systemd-sysv-generator[94284]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:19:19 compute-0 sudo[94250]: pam_unix(sudo:session): session closed for user root
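[editor note] The ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started) amounts to a daemon reload followed by enabling and starting the unit; the same pattern repeats below for netns-placeholder.service. A small sketch of the equivalent systemctl sequence, assuming it runs as root (the helper name is illustrative):

    import subprocess

    def enable_and_start(unit="edpm-container-shutdown.service"):
        # Equivalent of daemon_reload=True, enabled=True, state=started.
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "start", unit], check=True)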
Jan 06 15:19:19 compute-0 sudo[94440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbzaykawexdaqlmciwjqfbcxmfrcnyln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712759.196054-421-92027573369848/AnsiballZ_stat.py'
Jan 06 15:19:19 compute-0 sudo[94440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:19 compute-0 python3.9[94442]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:19:19 compute-0 sudo[94440]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:20 compute-0 sudo[94518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onyrdtrvoflxxrkcodbbxtzgitzjychq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712759.196054-421-92027573369848/AnsiballZ_file.py'
Jan 06 15:19:20 compute-0 sudo[94518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:20 compute-0 python3.9[94520]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:20 compute-0 sudo[94518]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:20 compute-0 sudo[94670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khzwyzexzvsqqeuvxuslipevldckgato ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712760.505379-433-204403108427771/AnsiballZ_stat.py'
Jan 06 15:19:20 compute-0 sudo[94670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:21 compute-0 python3.9[94672]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:19:21 compute-0 sudo[94670]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:21 compute-0 sudo[94748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erhjmcbivsuffsocpomqrqqvjatibldg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712760.505379-433-204403108427771/AnsiballZ_file.py'
Jan 06 15:19:21 compute-0 sudo[94748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:21 compute-0 python3.9[94750]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:21 compute-0 sudo[94748]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:22 compute-0 sudo[94900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpfgsiuhyhntefgtznsytgznbqxsnqgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712761.7651277-445-275477131291127/AnsiballZ_systemd.py'
Jan 06 15:19:22 compute-0 sudo[94900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:22 compute-0 python3.9[94902]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:19:22 compute-0 systemd[1]: Reloading.
Jan 06 15:19:22 compute-0 systemd-rc-local-generator[94925]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:19:22 compute-0 systemd-sysv-generator[94929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:19:22 compute-0 systemd[1]: Starting Create netns directory...
Jan 06 15:19:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 06 15:19:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 06 15:19:22 compute-0 systemd[1]: Finished Create netns directory.
Jan 06 15:19:22 compute-0 sudo[94900]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:23 compute-0 sudo[95094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzvocksrsyaoknbuwuufbzigjpmcmihl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712763.04256-455-87901074140536/AnsiballZ_file.py'
Jan 06 15:19:23 compute-0 sudo[95094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:23 compute-0 python3.9[95096]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:19:23 compute-0 sudo[95094]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:24 compute-0 sudo[95246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eelasyiibetmqpgyqiwbtttrzhrwngjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712763.838685-463-153773459060778/AnsiballZ_stat.py'
Jan 06 15:19:24 compute-0 sudo[95246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:24 compute-0 python3.9[95248]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:19:24 compute-0 sudo[95246]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:24 compute-0 sudo[95369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsoqgjwuxgtejisnuyqdxkcxvvqpsmaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712763.838685-463-153773459060778/AnsiballZ_copy.py'
Jan 06 15:19:24 compute-0 sudo[95369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:25 compute-0 python3.9[95371]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712763.838685-463-153773459060778/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:19:25 compute-0 sudo[95369]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:25 compute-0 sudo[95521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfkzcrjblaemtvorruanhwfisdnuxpke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712765.4347734-480-187332618773891/AnsiballZ_file.py'
Jan 06 15:19:25 compute-0 sudo[95521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:25 compute-0 python3.9[95523]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:25 compute-0 sudo[95521]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:26 compute-0 sudo[95673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztrljexbwiiuneflhhzzjrgqlhyktaml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712766.2000494-488-93335905161844/AnsiballZ_file.py'
Jan 06 15:19:26 compute-0 sudo[95673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:26 compute-0 python3.9[95675]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:19:26 compute-0 sudo[95673]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:27 compute-0 sudo[95825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaooenxganwprcidmqavwvidjaburygq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712766.996473-496-257818221852220/AnsiballZ_stat.py'
Jan 06 15:19:27 compute-0 sudo[95825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:27 compute-0 python3.9[95827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:19:27 compute-0 sudo[95825]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:28 compute-0 sudo[95948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggtpiblboqdbjhrqsilcdjzgdcpezoxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712766.996473-496-257818221852220/AnsiballZ_copy.py'
Jan 06 15:19:28 compute-0 sudo[95948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:28 compute-0 python3.9[95950]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712766.996473-496-257818221852220/.source.json _original_basename=.2bcipkbi follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:28 compute-0 sudo[95948]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:29 compute-0 python3.9[96100]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:31 compute-0 sudo[96521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsttwjktdatszcoxbflgtkvertqswgqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712770.835389-536-22775474934179/AnsiballZ_container_config_data.py'
Jan 06 15:19:31 compute-0 sudo[96521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:31 compute-0 python3.9[96523]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 06 15:19:31 compute-0 sudo[96521]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:32 compute-0 sudo[96673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtenzcwzvgyivmsnhabgfvapjmtpnzry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712771.9463081-547-173060316060860/AnsiballZ_container_config_hash.py'
Jan 06 15:19:32 compute-0 sudo[96673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:32 compute-0 python3.9[96675]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 06 15:19:32 compute-0 sudo[96673]: pam_unix(sudo:session): session closed for user root
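[editor note] The container_config_hash step feeds the EDPM_CONFIG_HASH value that appears later in the container environment, a dash-joined list of sha256-length digests. The log does not show how those digests are derived; purely as an illustration, assuming one digest per rendered config file under the config volume prefix:

    import hashlib
    from pathlib import Path

    def config_hash(paths):
        # Illustrative assumption only: one sha256 digest per config file,
        # joined with '-'. The real container_config_hash module may compute
        # its inputs differently.
        digests = [hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths]
        return "-".join(digests)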
Jan 06 15:19:33 compute-0 sudo[96825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbvyivmffsookvdostkzmecxaoltlzhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712772.953207-556-275228252995957/AnsiballZ_podman_container_info.py'
Jan 06 15:19:33 compute-0 sudo[96825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:33 compute-0 python3.9[96827]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 06 15:19:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:19:33 compute-0 sudo[96825]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:34 compute-0 sudo[96988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqvuuwyylcsdivjlrdyxfwzlljmsbcws ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767712774.2504635-569-13740503643490/AnsiballZ_edpm_container_manage.py'
Jan 06 15:19:34 compute-0 sudo[96988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:35 compute-0 python3[96990]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 06 15:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:19:35 compute-0 podman[97026]: 2026-01-06 15:19:35.282457953 +0000 UTC m=+0.079020888 container create 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 06 15:19:35 compute-0 podman[97026]: 2026-01-06 15:19:35.241967251 +0000 UTC m=+0.038530246 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 06 15:19:35 compute-0 python3[96990]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 06 15:19:35 compute-0 sudo[96988]: pam_unix(sudo:session): session closed for user root
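[editor note] As the podman create entries above show, edpm_container_manage stores the full config_data dictionary as a container label. A quick way to read that label back for debugging, assuming podman is available on the host (sketch only, helper name is illustrative):

    import subprocess

    def read_config_data_label(name="ovn_controller"):
        # Returns the raw config_data label string that edpm_ansible set on
        # the container at create time.
        out = subprocess.run(
            ["podman", "inspect", "--format",
             '{{ index .Config.Labels "config_data" }}', name],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()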
Jan 06 15:19:35 compute-0 sudo[97214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khftdkvhzekjvqdwyejaeowbolruhnqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712775.625325-577-170623955906778/AnsiballZ_stat.py'
Jan 06 15:19:35 compute-0 sudo[97214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 06 15:19:36 compute-0 python3.9[97216]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:19:36 compute-0 sudo[97214]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:36 compute-0 sudo[97368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooytxhbxtcthgwcqljykrhikvblmuifs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712776.5280702-586-103544667083562/AnsiballZ_file.py'
Jan 06 15:19:36 compute-0 sudo[97368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:37 compute-0 python3.9[97370]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:37 compute-0 sudo[97368]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:37 compute-0 sudo[97444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cckhhlqmhvrxmfaecbfdpupjfeqtpxew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712776.5280702-586-103544667083562/AnsiballZ_stat.py'
Jan 06 15:19:37 compute-0 sudo[97444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:37 compute-0 python3.9[97446]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:19:37 compute-0 sudo[97444]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:38 compute-0 sudo[97595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzxiayopfnxzvwtfmidwndjfvkgwvesy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712777.7910352-586-74562540706069/AnsiballZ_copy.py'
Jan 06 15:19:38 compute-0 sudo[97595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:38 compute-0 python3.9[97597]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767712777.7910352-586-74562540706069/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:38 compute-0 sudo[97595]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:38 compute-0 sudo[97671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juniulyuhpwyvfulzsdgedzgexnkdllz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712777.7910352-586-74562540706069/AnsiballZ_systemd.py'
Jan 06 15:19:38 compute-0 sudo[97671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:39 compute-0 python3.9[97673]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:19:39 compute-0 systemd[1]: Reloading.
Jan 06 15:19:39 compute-0 systemd-sysv-generator[97703]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:19:39 compute-0 systemd-rc-local-generator[97700]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:19:39 compute-0 sudo[97671]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:39 compute-0 sudo[97781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuoznkaqeyhbraesxfadgirzyiegywwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712777.7910352-586-74562540706069/AnsiballZ_systemd.py'
Jan 06 15:19:39 compute-0 sudo[97781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:40 compute-0 python3.9[97783]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:19:40 compute-0 systemd[1]: Reloading.
Jan 06 15:19:40 compute-0 systemd-rc-local-generator[97808]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:19:40 compute-0 systemd-sysv-generator[97812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:19:40 compute-0 systemd[1]: Starting ovn_controller container...
Jan 06 15:19:40 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 06 15:19:40 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67cba675eb70ca64c4c79485fbed4c67999043a84de88abadd256a2997dd1437/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 06 15:19:40 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2.
Jan 06 15:19:40 compute-0 podman[97824]: 2026-01-06 15:19:40.8423858 +0000 UTC m=+0.187840714 container init 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Jan 06 15:19:40 compute-0 ovn_controller[97840]: + sudo -E kolla_set_configs
Jan 06 15:19:40 compute-0 podman[97824]: 2026-01-06 15:19:40.878439375 +0000 UTC m=+0.223894239 container start 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:19:40 compute-0 edpm-start-podman-container[97824]: ovn_controller
Jan 06 15:19:40 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 06 15:19:40 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 06 15:19:40 compute-0 edpm-start-podman-container[97823]: Creating additional drop-in dependency for "ovn_controller" (79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2)
Jan 06 15:19:40 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 06 15:19:41 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 06 15:19:41 compute-0 systemd[1]: Reloading.
Jan 06 15:19:41 compute-0 podman[97846]: 2026-01-06 15:19:41.039773367 +0000 UTC m=+0.138141199 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 06 15:19:41 compute-0 systemd-rc-local-generator[97914]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:19:41 compute-0 systemd-sysv-generator[97922]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:19:41 compute-0 systemd[1]: 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2-2e2a80d3ea4af384.service: Main process exited, code=exited, status=1/FAILURE
Jan 06 15:19:41 compute-0 systemd[1]: 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2-2e2a80d3ea4af384.service: Failed with result 'exit-code'.
Jan 06 15:19:41 compute-0 systemd[97887]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 06 15:19:41 compute-0 systemd[1]: Started ovn_controller container.
Jan 06 15:19:41 compute-0 sudo[97781]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:41 compute-0 systemd[97887]: Queued start job for default target Main User Target.
Jan 06 15:19:41 compute-0 systemd[97887]: Created slice User Application Slice.
Jan 06 15:19:41 compute-0 systemd[97887]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 06 15:19:41 compute-0 systemd[97887]: Started Daily Cleanup of User's Temporary Directories.
Jan 06 15:19:41 compute-0 systemd[97887]: Reached target Paths.
Jan 06 15:19:41 compute-0 systemd[97887]: Reached target Timers.
Jan 06 15:19:41 compute-0 systemd[97887]: Starting D-Bus User Message Bus Socket...
Jan 06 15:19:41 compute-0 systemd[97887]: Starting Create User's Volatile Files and Directories...
Jan 06 15:19:41 compute-0 systemd[97887]: Listening on D-Bus User Message Bus Socket.
Jan 06 15:19:41 compute-0 systemd[97887]: Reached target Sockets.
Jan 06 15:19:41 compute-0 systemd[97887]: Finished Create User's Volatile Files and Directories.
Jan 06 15:19:41 compute-0 systemd[97887]: Reached target Basic System.
Jan 06 15:19:41 compute-0 systemd[97887]: Reached target Main User Target.
Jan 06 15:19:41 compute-0 systemd[97887]: Startup finished in 136ms.
Jan 06 15:19:41 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 06 15:19:41 compute-0 systemd[1]: Started Session c1 of User root.
Jan 06 15:19:41 compute-0 ovn_controller[97840]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 06 15:19:41 compute-0 ovn_controller[97840]: INFO:__main__:Validating config file
Jan 06 15:19:41 compute-0 ovn_controller[97840]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 06 15:19:41 compute-0 ovn_controller[97840]: INFO:__main__:Writing out command to execute
Jan 06 15:19:41 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 06 15:19:41 compute-0 ovn_controller[97840]: ++ cat /run_command
Jan 06 15:19:41 compute-0 ovn_controller[97840]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 06 15:19:41 compute-0 ovn_controller[97840]: + ARGS=
Jan 06 15:19:41 compute-0 ovn_controller[97840]: + sudo kolla_copy_cacerts
Jan 06 15:19:41 compute-0 systemd[1]: Started Session c2 of User root.
Jan 06 15:19:41 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 06 15:19:41 compute-0 ovn_controller[97840]: + [[ ! -n '' ]]
Jan 06 15:19:41 compute-0 ovn_controller[97840]: + . kolla_extend_start
Jan 06 15:19:41 compute-0 ovn_controller[97840]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 06 15:19:41 compute-0 ovn_controller[97840]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 06 15:19:41 compute-0 ovn_controller[97840]: + umask 0022
Jan 06 15:19:41 compute-0 ovn_controller[97840]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
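[editor note] The entrypoint trace above shows the kolla startup sequence inside the container: kolla_set_configs loads /var/lib/kolla/config_files/config.json (mounted from the host file written earlier), applies the COPY_ALWAYS strategy, writes the service command to /run_command, and the wrapper then execs it. A condensed sketch of that flow, assuming the conventional kolla config.json layout with top-level "command" and "config_files" keys (the actual file contents are not shown in this log):

    import json
    import os
    import shutil

    def run_kolla_style_entrypoint(config_path="/var/lib/kolla/config_files/config.json"):
        # Condensed sketch of the COPY_ALWAYS flow traced above; it ignores the
        # globbing, ownership and permission handling the real kolla_set_configs does.
        with open(config_path) as f:
            cfg = json.load(f)
        for item in cfg.get("config_files", []):
            shutil.copy(item["source"], item["dest"])
        with open("/run_command", "w") as f:
            f.write(cfg["command"])
        # The wrapper then execs the command, as the '+ exec ...' line above shows.
        os.execvp("/bin/sh", ["/bin/sh", "-c", cfg["command"]])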
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 06 15:19:41 compute-0 NetworkManager[56218]: <info>  [1767712781.5826] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 06 15:19:41 compute-0 NetworkManager[56218]: <info>  [1767712781.5834] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 06 15:19:41 compute-0 NetworkManager[56218]: <warn>  [1767712781.5838] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 06 15:19:41 compute-0 NetworkManager[56218]: <info>  [1767712781.5847] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Jan 06 15:19:41 compute-0 NetworkManager[56218]: <info>  [1767712781.5853] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Jan 06 15:19:41 compute-0 NetworkManager[56218]: <info>  [1767712781.5856] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 06 15:19:41 compute-0 kernel: br-int: entered promiscuous mode
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00013|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00014|features|INFO|OVS Feature: ct_flush, state: supported
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00015|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00017|main|INFO|OVS feature set changed, force recompute.
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00022|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00023|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00024|main|INFO|OVS feature set changed, force recompute.
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 06 15:19:41 compute-0 NetworkManager[56218]: <info>  [1767712781.5973] manager: (ovn-8c338d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 06 15:19:41 compute-0 ovn_controller[97840]: 2026-01-06T15:19:41Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 06 15:19:41 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 06 15:19:41 compute-0 systemd-udevd[98030]: Network interface NamePolicy= disabled on kernel command line.
Jan 06 15:19:41 compute-0 systemd-udevd[98029]: Network interface NamePolicy= disabled on kernel command line.
Jan 06 15:19:41 compute-0 NetworkManager[56218]: <info>  [1767712781.6272] device (genev_sys_6081): carrier: link connected
Jan 06 15:19:41 compute-0 NetworkManager[56218]: <info>  [1767712781.6276] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Jan 06 15:19:42 compute-0 python3.9[98108]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 06 15:19:43 compute-0 sudo[98258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfqpqjzkfyvgoplkumpmsfdhgqvwrjkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712782.6471179-627-165106323803872/AnsiballZ_stat.py'
Jan 06 15:19:43 compute-0 sudo[98258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:43 compute-0 python3.9[98260]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:19:43 compute-0 sudo[98258]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:43 compute-0 sudo[98381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cendiisymlgnjpgcjqdnawapllnthxye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712782.6471179-627-165106323803872/AnsiballZ_copy.py'
Jan 06 15:19:43 compute-0 sudo[98381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:43 compute-0 python3.9[98383]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712782.6471179-627-165106323803872/.source.yaml _original_basename=.a868_lxq follow=False checksum=fb16c1da2fe94865be0090ce0e64f7b3791cd37d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:19:43 compute-0 sudo[98381]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:44 compute-0 sudo[98533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhukcibyscvfdtmepiiqyvhqermudqea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712784.1650689-642-184076387779336/AnsiballZ_command.py'
Jan 06 15:19:44 compute-0 sudo[98533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:44 compute-0 python3.9[98535]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:19:44 compute-0 ovs-vsctl[98536]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 06 15:19:44 compute-0 sudo[98533]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:45 compute-0 sudo[98686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddmtfmyzvdizphopdluqxcqbvqjuvadv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712784.9195876-650-29870710979681/AnsiballZ_command.py'
Jan 06 15:19:45 compute-0 sudo[98686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:45 compute-0 python3.9[98688]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:19:45 compute-0 ovs-vsctl[98690]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 06 15:19:45 compute-0 sudo[98686]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:46 compute-0 sudo[98841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-futdhgfqymbopucijvdszfppxoaluogb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712785.8282168-664-175400849103106/AnsiballZ_command.py'
Jan 06 15:19:46 compute-0 sudo[98841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:46 compute-0 python3.9[98843]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:19:46 compute-0 ovs-vsctl[98844]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 06 15:19:46 compute-0 sudo[98841]: pam_unix(sudo:session): session closed for user root
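[editor note] The last three commands tidy up Open_vSwitch settings: hw-offload is dropped from other_config, and ovn-cms-options is read (failing with "no key" because it was never set) and then removed from external_ids; ovs-vsctl remove succeeds even when the key is absent, as the final INFO line shows. A small sketch of a tolerant read, assuming ovs-vsctl on PATH (helper name is illustrative):

    import subprocess

    def get_external_id(key="ovn-cms-options"):
        # Returns the value, or None when the key is not present
        # (the 'no key ... in Open_vSwitch record' error seen above).
        res = subprocess.run(
            ["ovs-vsctl", "get", "Open_vSwitch", ".", "external_ids:%s" % key],
            capture_output=True, text=True,
        )
        if res.returncode != 0:
            return None
        return res.stdout.strip().strip('"')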
Jan 06 15:19:46 compute-0 sshd-session[87197]: Connection closed by 192.168.122.30 port 48762
Jan 06 15:19:46 compute-0 sshd-session[87194]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:19:46 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Jan 06 15:19:46 compute-0 systemd[1]: session-20.scope: Consumed 54.799s CPU time.
Jan 06 15:19:46 compute-0 systemd-logind[791]: Session 20 logged out. Waiting for processes to exit.
Jan 06 15:19:46 compute-0 systemd-logind[791]: Removed session 20.
Jan 06 15:19:51 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 06 15:19:51 compute-0 systemd[97887]: Activating special unit Exit the Session...
Jan 06 15:19:51 compute-0 systemd[97887]: Stopped target Main User Target.
Jan 06 15:19:51 compute-0 systemd[97887]: Stopped target Basic System.
Jan 06 15:19:51 compute-0 systemd[97887]: Stopped target Paths.
Jan 06 15:19:51 compute-0 systemd[97887]: Stopped target Sockets.
Jan 06 15:19:51 compute-0 systemd[97887]: Stopped target Timers.
Jan 06 15:19:51 compute-0 systemd[97887]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 06 15:19:51 compute-0 systemd[97887]: Closed D-Bus User Message Bus Socket.
Jan 06 15:19:51 compute-0 systemd[97887]: Stopped Create User's Volatile Files and Directories.
Jan 06 15:19:51 compute-0 systemd[97887]: Removed slice User Application Slice.
Jan 06 15:19:51 compute-0 systemd[97887]: Reached target Shutdown.
Jan 06 15:19:51 compute-0 systemd[97887]: Finished Exit the Session.
Jan 06 15:19:51 compute-0 systemd[97887]: Reached target Exit the Session.
Jan 06 15:19:51 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 06 15:19:51 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 06 15:19:51 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 06 15:19:51 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 06 15:19:51 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 06 15:19:51 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 06 15:19:51 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 06 15:19:53 compute-0 sshd-session[98870]: Accepted publickey for zuul from 192.168.122.30 port 54508 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:19:53 compute-0 systemd-logind[791]: New session 22 of user zuul.
Jan 06 15:19:53 compute-0 systemd[1]: Started Session 22 of User zuul.
Jan 06 15:19:53 compute-0 sshd-session[98870]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:19:54 compute-0 python3.9[99023]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:19:55 compute-0 sudo[99177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaccwpsirrandhjwlfzkwmbaypdmomku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712794.6359367-29-204084983611926/AnsiballZ_file.py'
Jan 06 15:19:55 compute-0 sudo[99177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:55 compute-0 python3.9[99179]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:19:55 compute-0 sudo[99177]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:55 compute-0 sudo[99329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niiziftecnjsnebbpknpedcmjowrasvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712795.4661274-29-244768368793126/AnsiballZ_file.py'
Jan 06 15:19:55 compute-0 sudo[99329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:55 compute-0 python3.9[99331]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:19:55 compute-0 sudo[99329]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:56 compute-0 sudo[99481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mogakmljmxxnfwibwqmmmbabnocghhaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712796.1267273-29-178300087956366/AnsiballZ_file.py'
Jan 06 15:19:56 compute-0 sudo[99481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:56 compute-0 python3.9[99483]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:19:56 compute-0 sudo[99481]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:57 compute-0 sudo[99633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pidhgketoyvnfjvvycfmlfqbrbiwjuxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712796.803969-29-134500724153768/AnsiballZ_file.py'
Jan 06 15:19:57 compute-0 sudo[99633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:57 compute-0 python3.9[99635]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:19:57 compute-0 sudo[99633]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:57 compute-0 sudo[99785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvyofxyyvchgvictrduumkvtxyzzhptc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712797.4976478-29-147866183939232/AnsiballZ_file.py'
Jan 06 15:19:57 compute-0 sudo[99785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:57 compute-0 python3.9[99787]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:19:57 compute-0 sudo[99785]: pam_unix(sudo:session): session closed for user root
Jan 06 15:19:59 compute-0 python3.9[99937]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:19:59 compute-0 sudo[100088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfrkgvugwzlocdjigwzrefqqulzqrfig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712799.2423263-73-160925585529161/AnsiballZ_seboolean.py'
Jan 06 15:19:59 compute-0 sudo[100088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:19:59 compute-0 python3.9[100090]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 06 15:20:00 compute-0 sudo[100088]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:01 compute-0 python3.9[100240]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:02 compute-0 python3.9[100361]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712800.801581-81-104191524144918/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:03 compute-0 python3.9[100511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:03 compute-0 python3.9[100632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712802.5665665-96-67027014663640/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:06 compute-0 sudo[100782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiugcjdlrtdfmnjloxaqxmaymtdfwkgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712805.707302-113-252723732876310/AnsiballZ_setup.py'
Jan 06 15:20:06 compute-0 sudo[100782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:06 compute-0 python3.9[100784]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:20:06 compute-0 sudo[100782]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:07 compute-0 sudo[100866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjptypeugcanbvietmepktljovgpqlob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712805.707302-113-252723732876310/AnsiballZ_dnf.py'
Jan 06 15:20:07 compute-0 sudo[100866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:07 compute-0 python3.9[100868]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:20:08 compute-0 sudo[100866]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:09 compute-0 sudo[101019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bukjoklyvtsxubskoabqsqallazjfymd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712808.7494216-125-201039594406071/AnsiballZ_systemd.py'
Jan 06 15:20:09 compute-0 sudo[101019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:09 compute-0 python3.9[101021]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 06 15:20:10 compute-0 sudo[101019]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:11 compute-0 ovn_controller[97840]: 2026-01-06T15:20:11Z|00025|memory|INFO|16512 kB peak resident set size after 29.9 seconds
Jan 06 15:20:11 compute-0 ovn_controller[97840]: 2026-01-06T15:20:11Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 06 15:20:11 compute-0 podman[101148]: 2026-01-06 15:20:11.528860128 +0000 UTC m=+0.143124029 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 06 15:20:11 compute-0 python3.9[101187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:12 compute-0 python3.9[101320]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712811.079847-133-252377442493487/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:13 compute-0 python3.9[101470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:13 compute-0 python3.9[101591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712812.4708471-133-168263079543936/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:15 compute-0 python3.9[101741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:16 compute-0 python3.9[101862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712814.9707263-177-130892108773199/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:17 compute-0 python3.9[102012]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:17 compute-0 python3.9[102133]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712816.4886084-177-20492753880576/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:18 compute-0 python3.9[102283]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:20:19 compute-0 sudo[102435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwbdqgnursbiytrgneqbdzefgknqikjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712819.1091797-215-200750567872409/AnsiballZ_file.py'
Jan 06 15:20:19 compute-0 sudo[102435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:19 compute-0 python3.9[102437]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:19 compute-0 sudo[102435]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:20 compute-0 sudo[102587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndhqxpeidufskogsqyjvbhsrbmdcpbem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712819.849514-223-262583025132331/AnsiballZ_stat.py'
Jan 06 15:20:20 compute-0 sudo[102587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:20 compute-0 python3.9[102589]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:20 compute-0 sudo[102587]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:20 compute-0 sudo[102665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqxjajgwuqckrbkyuhkkkldcrboqeuoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712819.849514-223-262583025132331/AnsiballZ_file.py'
Jan 06 15:20:20 compute-0 sudo[102665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:20 compute-0 python3.9[102667]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:20 compute-0 sudo[102665]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:21 compute-0 sudo[102817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzrkkjvtasxqssbglxssetkinwlkhgll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712821.1576579-223-216372241730928/AnsiballZ_stat.py'
Jan 06 15:20:21 compute-0 sudo[102817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:21 compute-0 python3.9[102819]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:21 compute-0 sudo[102817]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:22 compute-0 sudo[102895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upfbnowzkqzctzwmhrnedtfqjvgbjuqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712821.1576579-223-216372241730928/AnsiballZ_file.py'
Jan 06 15:20:22 compute-0 sudo[102895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:22 compute-0 python3.9[102897]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:22 compute-0 sudo[102895]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:22 compute-0 sudo[103047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzesrjosxmlymmwzcgttbucwlrfkraau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712822.505109-246-54062157519576/AnsiballZ_file.py'
Jan 06 15:20:22 compute-0 sudo[103047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:23 compute-0 python3.9[103049]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:20:23 compute-0 sudo[103047]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:23 compute-0 sudo[103199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aysfhtfvpjwupzlzyhvokxdhelcyqndh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712823.2946348-254-210311740921716/AnsiballZ_stat.py'
Jan 06 15:20:23 compute-0 sudo[103199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:23 compute-0 python3.9[103201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:23 compute-0 sudo[103199]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:24 compute-0 sudo[103277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smnaayqvhsetfleezjnbrmyqxmuauvcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712823.2946348-254-210311740921716/AnsiballZ_file.py'
Jan 06 15:20:24 compute-0 sudo[103277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:24 compute-0 python3.9[103279]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:20:24 compute-0 sudo[103277]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:25 compute-0 sudo[103429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihsqvrpvaxfxtemyauszqifhohuirqve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712824.6987522-266-235018379440316/AnsiballZ_stat.py'
Jan 06 15:20:25 compute-0 sudo[103429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:25 compute-0 python3.9[103431]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:25 compute-0 sudo[103429]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:26 compute-0 sudo[103507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtvarpncwdsvldwqplkrhwrdwzbcaleh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712824.6987522-266-235018379440316/AnsiballZ_file.py'
Jan 06 15:20:26 compute-0 sudo[103507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:26 compute-0 python3.9[103509]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:20:26 compute-0 sudo[103507]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:27 compute-0 sudo[103659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijufijnsbsmmtesemcyndgggiydtsqpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712826.7280505-278-143288902440983/AnsiballZ_systemd.py'
Jan 06 15:20:27 compute-0 sudo[103659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:27 compute-0 python3.9[103661]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:20:27 compute-0 systemd[1]: Reloading.
Jan 06 15:20:27 compute-0 systemd-rc-local-generator[103692]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:20:27 compute-0 systemd-sysv-generator[103696]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:20:28 compute-0 sudo[103659]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:28 compute-0 sudo[103850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vafmkhiamltyfjkwnklbinorqsgskkyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712828.3416433-286-104462425405403/AnsiballZ_stat.py'
Jan 06 15:20:28 compute-0 sudo[103850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:28 compute-0 python3.9[103852]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:28 compute-0 sudo[103850]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:29 compute-0 sudo[103928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwomjvunagvpncqfswjccynfqjsmieci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712828.3416433-286-104462425405403/AnsiballZ_file.py'
Jan 06 15:20:29 compute-0 sudo[103928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:29 compute-0 python3.9[103930]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:20:29 compute-0 sudo[103928]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:30 compute-0 sudo[104080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sduqgngpnnadcmkdjnmoukaboaogvgth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712829.698759-298-20253910444830/AnsiballZ_stat.py'
Jan 06 15:20:30 compute-0 sudo[104080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:30 compute-0 python3.9[104082]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:30 compute-0 sudo[104080]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:30 compute-0 sudo[104158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqdyfqagswnmpmrjzdrajlysjquhdhxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712829.698759-298-20253910444830/AnsiballZ_file.py'
Jan 06 15:20:30 compute-0 sudo[104158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:30 compute-0 python3.9[104160]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:20:30 compute-0 sudo[104158]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:31 compute-0 sudo[104310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mptbequcggdgvnabhonyaoiyqdudxiyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712831.0278406-310-40412260220013/AnsiballZ_systemd.py'
Jan 06 15:20:31 compute-0 sudo[104310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:31 compute-0 python3.9[104312]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:20:31 compute-0 systemd[1]: Reloading.
Jan 06 15:20:31 compute-0 systemd-rc-local-generator[104335]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:20:31 compute-0 systemd-sysv-generator[104338]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:20:33 compute-0 systemd[1]: Starting Create netns directory...
Jan 06 15:20:33 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 06 15:20:33 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 06 15:20:33 compute-0 systemd[1]: Finished Create netns directory.
Jan 06 15:20:33 compute-0 sudo[104310]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:34 compute-0 sudo[104504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybcoatkojcdqspxzzgoazgsgbkbyrvcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712834.1529665-320-115761937829372/AnsiballZ_file.py'
Jan 06 15:20:34 compute-0 sudo[104504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:34 compute-0 python3.9[104506]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:34 compute-0 sudo[104504]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:35 compute-0 sudo[104656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnpmhwjadidvqeqvslmdpnciyzkluanz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712834.7938783-328-183652677750472/AnsiballZ_stat.py'
Jan 06 15:20:35 compute-0 sudo[104656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:35 compute-0 python3.9[104658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:35 compute-0 sudo[104656]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:35 compute-0 sudo[104779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rymrajurspbyphjxiaizotrzkdmnseyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712834.7938783-328-183652677750472/AnsiballZ_copy.py'
Jan 06 15:20:35 compute-0 sudo[104779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:35 compute-0 python3.9[104781]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767712834.7938783-328-183652677750472/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:35 compute-0 sudo[104779]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:36 compute-0 sudo[104931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqzvcuozosrnfmfqnvrosutuiwriktwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712836.3741136-345-214405599080252/AnsiballZ_file.py'
Jan 06 15:20:36 compute-0 sudo[104931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:36 compute-0 python3.9[104933]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:20:36 compute-0 sudo[104931]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:37 compute-0 sudo[105083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxfoxqeqbpnpuwqqpahwkjfqipqtykif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712837.230801-353-11441006855548/AnsiballZ_file.py'
Jan 06 15:20:37 compute-0 sudo[105083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:37 compute-0 python3.9[105085]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:20:37 compute-0 sudo[105083]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:38 compute-0 sudo[105235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wariplajzsogajuqscuxaifdtqcelkhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712837.9954965-361-193617346121826/AnsiballZ_stat.py'
Jan 06 15:20:38 compute-0 sudo[105235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:38 compute-0 python3.9[105237]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:38 compute-0 sudo[105235]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:38 compute-0 sudo[105358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmyxwbdalslxgkgihvotjvlykncpgroc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712837.9954965-361-193617346121826/AnsiballZ_copy.py'
Jan 06 15:20:38 compute-0 sudo[105358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:39 compute-0 python3.9[105360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712837.9954965-361-193617346121826/.source.json _original_basename=.mh17bhgr follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:20:39 compute-0 sudo[105358]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:39 compute-0 python3.9[105510]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:20:41 compute-0 podman[105858]: 2026-01-06 15:20:41.864297143 +0000 UTC m=+0.124518983 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:20:42 compute-0 sudo[105958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzwylpimnegxnvkqmcfjmjyinwixysvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712841.553147-401-222120839946382/AnsiballZ_container_config_data.py'
Jan 06 15:20:42 compute-0 sudo[105958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:42 compute-0 python3.9[105960]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 06 15:20:42 compute-0 sudo[105958]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:43 compute-0 sudo[106110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qydnbptonvjjbpclivgtrpwdqbnztezh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712842.577591-412-217190094125552/AnsiballZ_container_config_hash.py'
Jan 06 15:20:43 compute-0 sudo[106110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:43 compute-0 python3.9[106112]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 06 15:20:43 compute-0 sudo[106110]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:44 compute-0 sudo[106262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpppybwsebvgugqodgxgbdinxmbrxrsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712843.6847374-421-170062451593228/AnsiballZ_podman_container_info.py'
Jan 06 15:20:44 compute-0 sudo[106262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:44 compute-0 python3.9[106264]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 06 15:20:44 compute-0 sudo[106262]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:45 compute-0 sudo[106438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jetwzjroitznrqmolfcgrruhzkaatvbx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767712845.1632957-434-212067526988544/AnsiballZ_edpm_container_manage.py'
Jan 06 15:20:45 compute-0 sudo[106438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:45 compute-0 python3[106440]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 06 15:20:46 compute-0 podman[106479]: 2026-01-06 15:20:46.196392047 +0000 UTC m=+0.055841100 container create 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:20:46 compute-0 podman[106479]: 2026-01-06 15:20:46.164831985 +0000 UTC m=+0.024281058 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 06 15:20:46 compute-0 python3[106440]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 06 15:20:46 compute-0 sudo[106438]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:47 compute-0 sudo[106667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmwwakdttjmsvlrzdldekbjwrztvkxee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712846.654402-442-209698874640812/AnsiballZ_stat.py'
Jan 06 15:20:47 compute-0 sudo[106667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:47 compute-0 python3.9[106669]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:20:47 compute-0 sudo[106667]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:48 compute-0 sudo[106821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kosxhgdtxdxggryyhfkyeoqwgsovzhtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712847.6656892-451-97314414279070/AnsiballZ_file.py'
Jan 06 15:20:48 compute-0 sudo[106821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:48 compute-0 python3.9[106823]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:20:48 compute-0 sudo[106821]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:48 compute-0 sudo[106897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmngouafvgzxqhrngonfellxhkuevjga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712847.6656892-451-97314414279070/AnsiballZ_stat.py'
Jan 06 15:20:48 compute-0 sudo[106897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:48 compute-0 python3.9[106899]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:20:48 compute-0 sudo[106897]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:49 compute-0 sudo[107048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tilqwzkirgmgyksisckbbqbfztghvsok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712848.7635822-451-51503119737373/AnsiballZ_copy.py'
Jan 06 15:20:49 compute-0 sudo[107048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:49 compute-0 python3.9[107050]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767712848.7635822-451-51503119737373/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:20:49 compute-0 sudo[107048]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:49 compute-0 sudo[107124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihciakphzybxafycslepzadbiqphaqvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712848.7635822-451-51503119737373/AnsiballZ_systemd.py'
Jan 06 15:20:49 compute-0 sudo[107124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:50 compute-0 python3.9[107126]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:20:50 compute-0 systemd[1]: Reloading.
Jan 06 15:20:50 compute-0 systemd-rc-local-generator[107154]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:20:50 compute-0 systemd-sysv-generator[107157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:20:50 compute-0 sudo[107124]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:50 compute-0 sudo[107235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lanzpiejzpjstasodskyozvxtssteifu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712848.7635822-451-51503119737373/AnsiballZ_systemd.py'
Jan 06 15:20:50 compute-0 sudo[107235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:51 compute-0 python3.9[107237]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:20:51 compute-0 systemd[1]: Reloading.
Jan 06 15:20:51 compute-0 systemd-rc-local-generator[107264]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:20:51 compute-0 systemd-sysv-generator[107267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:20:51 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 06 15:20:51 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:20:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6831e424a45c5f758b81fcbd9d39c61b4978edb94e768dbed8075a2a626cadb/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 06 15:20:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6831e424a45c5f758b81fcbd9d39c61b4978edb94e768dbed8075a2a626cadb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 06 15:20:51 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487.
Jan 06 15:20:51 compute-0 podman[107278]: 2026-01-06 15:20:51.616997971 +0000 UTC m=+0.158093568 container init 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: + sudo -E kolla_set_configs
Jan 06 15:20:51 compute-0 podman[107278]: 2026-01-06 15:20:51.644896515 +0000 UTC m=+0.185992042 container start 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 06 15:20:51 compute-0 edpm-start-podman-container[107278]: ovn_metadata_agent
Jan 06 15:20:51 compute-0 edpm-start-podman-container[107277]: Creating additional drop-in dependency for "ovn_metadata_agent" (7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487)
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Validating config file
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Copying service configuration files
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Writing out command to execute
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 06 15:20:51 compute-0 systemd[1]: Reloading.
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: ++ cat /run_command
Jan 06 15:20:51 compute-0 podman[107299]: 2026-01-06 15:20:51.749928857 +0000 UTC m=+0.087518296 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: + CMD=neutron-ovn-metadata-agent
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: + ARGS=
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: + sudo kolla_copy_cacerts
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: Running command: 'neutron-ovn-metadata-agent'
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: + [[ ! -n '' ]]
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: + . kolla_extend_start
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: + umask 0022
Jan 06 15:20:51 compute-0 ovn_metadata_agent[107293]: + exec neutron-ovn-metadata-agent
Jan 06 15:20:51 compute-0 systemd-rc-local-generator[107371]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:20:51 compute-0 systemd-sysv-generator[107374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:20:51 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 06 15:20:52 compute-0 sudo[107235]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:52 compute-0 python3.9[107528]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 06 15:20:53 compute-0 sudo[107678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnzzbxvodtxssjjfbrjcltpyypmudimf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712853.243298-492-248681229548314/AnsiballZ_stat.py'
Jan 06 15:20:53 compute-0 sudo[107678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.602 107298 INFO neutron.common.config [-] Logging enabled!
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.602 107298 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.602 107298 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.603 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.603 107298 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.603 107298 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.603 107298 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.603 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.603 107298 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.603 107298 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.603 107298 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.604 107298 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.604 107298 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.604 107298 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.604 107298 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.604 107298 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.604 107298 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.604 107298 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.604 107298 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.605 107298 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.605 107298 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.605 107298 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.605 107298 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.605 107298 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.605 107298 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.605 107298 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.605 107298 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.605 107298 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.606 107298 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.606 107298 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.606 107298 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.606 107298 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.606 107298 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.606 107298 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.606 107298 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.606 107298 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.606 107298 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.607 107298 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.607 107298 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.607 107298 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.607 107298 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.607 107298 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.607 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.607 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.607 107298 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.608 107298 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.608 107298 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.608 107298 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.608 107298 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.608 107298 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.608 107298 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.608 107298 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.608 107298 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.608 107298 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.608 107298 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.609 107298 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.609 107298 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.609 107298 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.609 107298 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.609 107298 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.609 107298 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.609 107298 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.609 107298 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.609 107298 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.610 107298 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.610 107298 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.610 107298 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.610 107298 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.610 107298 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.610 107298 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.610 107298 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.610 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.610 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.611 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.611 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.611 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.611 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.611 107298 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.611 107298 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.611 107298 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.611 107298 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.611 107298 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.612 107298 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.612 107298 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.612 107298 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.612 107298 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.612 107298 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.612 107298 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.612 107298 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.612 107298 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.612 107298 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.613 107298 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.613 107298 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.613 107298 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.613 107298 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.613 107298 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.613 107298 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.613 107298 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.613 107298 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.613 107298 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.613 107298 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.613 107298 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.614 107298 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.614 107298 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.614 107298 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.614 107298 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.614 107298 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.614 107298 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.614 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.614 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.614 107298 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.615 107298 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.615 107298 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.615 107298 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.615 107298 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.615 107298 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.615 107298 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.615 107298 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.615 107298 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.616 107298 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.616 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.616 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.616 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.616 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.616 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.616 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.617 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.617 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.617 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.617 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.617 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.617 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.617 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.617 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.617 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.618 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.618 107298 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.618 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.618 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.618 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.618 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.618 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.618 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.619 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.619 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.619 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.619 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.619 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.619 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.619 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.619 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.619 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.620 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.620 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.620 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.620 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.620 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.620 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.620 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.620 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.620 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.621 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.621 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.621 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.621 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.621 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.621 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.621 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.622 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.622 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.622 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.622 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.622 107298 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.622 107298 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.622 107298 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.623 107298 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.623 107298 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.623 107298 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.623 107298 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.623 107298 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.623 107298 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.623 107298 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.624 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.624 107298 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.624 107298 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.624 107298 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.624 107298 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.624 107298 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.624 107298 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.624 107298 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.625 107298 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.625 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.625 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.625 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.625 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.625 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.625 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.625 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.625 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.626 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.626 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.626 107298 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.626 107298 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.626 107298 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.626 107298 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.626 107298 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.626 107298 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.627 107298 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.627 107298 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.627 107298 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.627 107298 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.627 107298 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.627 107298 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.627 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.627 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.628 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.628 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.628 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.628 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.628 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.628 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.628 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.629 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.629 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.629 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.629 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.629 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.629 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.629 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.629 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.630 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.630 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.630 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.630 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.630 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.630 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.630 107298 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.630 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.631 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.631 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.631 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.631 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.631 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.631 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.631 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.631 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.632 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.632 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.632 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.632 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.632 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.632 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.632 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.632 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.633 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.633 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.633 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.633 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.633 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.633 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.633 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.633 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.633 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.634 107298 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.634 107298 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.634 107298 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.634 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.634 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.634 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.634 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.634 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.634 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.635 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.635 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.635 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.635 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.635 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.635 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.635 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.635 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.636 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.636 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.636 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.636 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.636 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.636 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.636 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.636 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.637 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.637 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.637 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.637 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.637 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.637 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.637 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.638 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.638 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.638 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.638 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.638 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.638 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.638 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.638 107298 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.638 107298 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.650 107298 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.650 107298 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.650 107298 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.651 107298 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.651 107298 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.665 107298 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c958bb1c-18b4-4d04-b6d7-d8a86dfc32de (UUID: c958bb1c-18b4-4d04-b6d7-d8a86dfc32de) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.690 107298 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.690 107298 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.690 107298 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.690 107298 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.693 107298 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.699 107298 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.704 107298 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c958bb1c-18b4-4d04-b6d7-d8a86dfc32de'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7ff658e72610>], external_ids={}, name=c958bb1c-18b4-4d04-b6d7-d8a86dfc32de, nb_cfg_timestamp=1767712789604, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.705 107298 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7ff658e720d0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.706 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.706 107298 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.706 107298 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.707 107298 INFO oslo_service.service [-] Starting 1 workers
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.712 107298 DEBUG oslo_service.service [-] Started child 107681 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.716 107298 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpa2d_uacb/privsep.sock']
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.720 107681 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-2006565'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.754 107681 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.755 107681 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.755 107681 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.759 107681 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.766 107681 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 06 15:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:53.773 107681 INFO eventlet.wsgi.server [-] (107681) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 06 15:20:53 compute-0 python3.9[107680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:20:53 compute-0 sudo[107678]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:54 compute-0 sudo[107808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwpfjgjffnofjngvybxzojcqrzhzqkty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712853.243298-492-248681229548314/AnsiballZ_copy.py'
Jan 06 15:20:54 compute-0 sudo[107808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:20:54 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 06 15:20:54 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:54.466 107298 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 06 15:20:54 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:54.467 107298 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpa2d_uacb/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 06 15:20:54 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:54.320 107811 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 06 15:20:54 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:54.328 107811 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 06 15:20:54 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:54.332 107811 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 06 15:20:54 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:54.332 107811 INFO oslo.privsep.daemon [-] privsep daemon running as pid 107811
Jan 06 15:20:54 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:54.471 107811 DEBUG oslo.privsep.daemon [-] privsep: reply[eac218dc-5652-4af6-a787-30162b5ce961]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 06 15:20:54 compute-0 python3.9[107810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767712853.243298-492-248681229548314/.source.yaml _original_basename=.4ywlrlcd follow=False checksum=c964d05d1ab8af70a0cc28806680e94c222183ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:20:54 compute-0 sudo[107808]: pam_unix(sudo:session): session closed for user root
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.018 107811 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.018 107811 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.019 107811 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:20:55 compute-0 sshd-session[98873]: Connection closed by 192.168.122.30 port 54508
Jan 06 15:20:55 compute-0 sshd-session[98870]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:20:55 compute-0 systemd-logind[791]: Session 22 logged out. Waiting for processes to exit.
Jan 06 15:20:55 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Jan 06 15:20:55 compute-0 systemd[1]: session-22.scope: Consumed 41.335s CPU time.
Jan 06 15:20:55 compute-0 systemd-logind[791]: Removed session 22.
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.536 107811 DEBUG oslo.privsep.daemon [-] privsep: reply[79888aed-237a-4c50-a517-06264dda6cdd]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.538 107298 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c958bb1c-18b4-4d04-b6d7-d8a86dfc32de, column=external_ids, values=({'neutron:ovn-metadata-id': '9702c32a-e7a6-56fb-9d10-0c279d5f1651'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.553 107298 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c958bb1c-18b4-4d04-b6d7-d8a86dfc32de, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.559 107298 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.559 107298 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.561 107298 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.562 107298 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.562 107298 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.562 107298 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.562 107298 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.562 107298 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.562 107298 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.562 107298 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.563 107298 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.563 107298 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.563 107298 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.563 107298 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.563 107298 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.563 107298 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.563 107298 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.563 107298 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.564 107298 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.564 107298 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.564 107298 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.564 107298 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.564 107298 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.564 107298 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.564 107298 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.564 107298 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.565 107298 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.565 107298 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.565 107298 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.565 107298 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.565 107298 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.565 107298 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.565 107298 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.565 107298 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.566 107298 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.566 107298 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.566 107298 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.566 107298 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.566 107298 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.566 107298 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.566 107298 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.566 107298 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.567 107298 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.567 107298 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.567 107298 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.567 107298 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.567 107298 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.567 107298 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.567 107298 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.567 107298 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.567 107298 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.567 107298 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.568 107298 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.568 107298 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.568 107298 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.568 107298 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.568 107298 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.568 107298 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.568 107298 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.568 107298 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.568 107298 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.568 107298 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.569 107298 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.569 107298 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.569 107298 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.569 107298 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.569 107298 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.569 107298 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.569 107298 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.569 107298 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.569 107298 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.570 107298 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.570 107298 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.570 107298 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.570 107298 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.570 107298 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.570 107298 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.570 107298 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.570 107298 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.570 107298 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.570 107298 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.571 107298 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.571 107298 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.571 107298 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.571 107298 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.571 107298 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.571 107298 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.571 107298 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.571 107298 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.572 107298 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.572 107298 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.572 107298 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.572 107298 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.572 107298 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.572 107298 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.572 107298 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.572 107298 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.572 107298 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.573 107298 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.573 107298 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.573 107298 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.573 107298 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.573 107298 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.573 107298 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.573 107298 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.573 107298 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.573 107298 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.573 107298 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.574 107298 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.574 107298 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.574 107298 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.574 107298 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.574 107298 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.574 107298 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.574 107298 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.574 107298 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.574 107298 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.575 107298 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.575 107298 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.575 107298 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.575 107298 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.575 107298 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.575 107298 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.575 107298 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.575 107298 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.576 107298 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.576 107298 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.576 107298 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.576 107298 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.576 107298 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.576 107298 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.576 107298 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.576 107298 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.576 107298 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.577 107298 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.577 107298 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.577 107298 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.577 107298 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.577 107298 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.577 107298 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.577 107298 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.577 107298 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.578 107298 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.578 107298 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.578 107298 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.578 107298 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.578 107298 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.578 107298 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.578 107298 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.578 107298 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.578 107298 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.578 107298 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.579 107298 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.579 107298 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.579 107298 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.579 107298 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.579 107298 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.579 107298 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.579 107298 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.579 107298 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.579 107298 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.579 107298 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.580 107298 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.580 107298 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.580 107298 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.580 107298 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.580 107298 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.580 107298 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.580 107298 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.580 107298 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.580 107298 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.581 107298 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.581 107298 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.581 107298 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.581 107298 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.581 107298 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.581 107298 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.581 107298 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.581 107298 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.581 107298 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.582 107298 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.582 107298 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.582 107298 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.582 107298 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.582 107298 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.582 107298 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.582 107298 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.582 107298 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.582 107298 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.583 107298 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.583 107298 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.583 107298 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.583 107298 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.583 107298 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.583 107298 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.583 107298 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.583 107298 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.584 107298 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.584 107298 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.584 107298 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.584 107298 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.584 107298 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.584 107298 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.584 107298 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.584 107298 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.585 107298 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.585 107298 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.585 107298 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.585 107298 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.585 107298 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.585 107298 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.585 107298 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.586 107298 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.586 107298 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.586 107298 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.586 107298 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.586 107298 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.586 107298 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.586 107298 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.586 107298 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.586 107298 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.587 107298 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.587 107298 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.587 107298 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.587 107298 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.587 107298 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.587 107298 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.587 107298 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.587 107298 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.588 107298 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.588 107298 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.588 107298 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.588 107298 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.588 107298 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.588 107298 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.588 107298 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.588 107298 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.589 107298 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.589 107298 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.589 107298 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.589 107298 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.589 107298 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.589 107298 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.589 107298 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.589 107298 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.589 107298 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.590 107298 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.590 107298 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.590 107298 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.590 107298 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.590 107298 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.590 107298 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.590 107298 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.590 107298 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.591 107298 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.591 107298 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.591 107298 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.591 107298 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.591 107298 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.591 107298 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.591 107298 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.591 107298 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.591 107298 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.592 107298 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.592 107298 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.592 107298 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.592 107298 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.592 107298 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.592 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.592 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.592 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.593 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.593 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.593 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.593 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.593 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.593 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.593 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.593 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.594 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.594 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.594 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.594 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.594 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.594 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.594 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.595 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.595 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.595 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.595 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.595 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.595 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.595 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.596 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.596 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.596 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.596 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.596 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.596 107298 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.596 107298 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.597 107298 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.597 107298 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.597 107298 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:20:55 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:20:55.597 107298 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 06 15:21:01 compute-0 sshd-session[107840]: Accepted publickey for zuul from 192.168.122.30 port 50164 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:21:01 compute-0 systemd-logind[791]: New session 23 of user zuul.
Jan 06 15:21:01 compute-0 systemd[1]: Started Session 23 of User zuul.
Jan 06 15:21:01 compute-0 sshd-session[107840]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:21:02 compute-0 python3.9[107993]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:21:03 compute-0 sudo[108147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvmrttdefpfcvaipwpcrdegxeuihnsfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712863.193663-29-251108602435039/AnsiballZ_command.py'
Jan 06 15:21:03 compute-0 sudo[108147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:04 compute-0 python3.9[108149]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:21:04 compute-0 sudo[108147]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:05 compute-0 sudo[108312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmnjsipetouazusnepurryniaeuhtczi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712864.5477092-40-2077964418196/AnsiballZ_systemd_service.py'
Jan 06 15:21:05 compute-0 sudo[108312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:05 compute-0 python3.9[108314]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:21:05 compute-0 systemd[1]: Reloading.
Jan 06 15:21:05 compute-0 systemd-rc-local-generator[108340]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:21:05 compute-0 systemd-sysv-generator[108345]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:21:05 compute-0 sudo[108312]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:06 compute-0 python3.9[108499]: ansible-ansible.builtin.service_facts Invoked
Jan 06 15:21:06 compute-0 network[108516]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 06 15:21:06 compute-0 network[108517]: 'network-scripts' will be removed from distribution in near future.
Jan 06 15:21:06 compute-0 network[108518]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 06 15:21:11 compute-0 sudo[108777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycjedsvpmmggcdsgmntjmckwtvbwydyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712871.4027348-59-269705846507978/AnsiballZ_systemd_service.py'
Jan 06 15:21:11 compute-0 sudo[108777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:12 compute-0 python3.9[108779]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:21:12 compute-0 sudo[108777]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:12 compute-0 podman[108781]: 2026-01-06 15:21:12.270147137 +0000 UTC m=+0.112075035 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 06 15:21:12 compute-0 sudo[108957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcpdijbgnacbeepongounuzaclwmkkok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712872.3292217-59-92434059413977/AnsiballZ_systemd_service.py'
Jan 06 15:21:12 compute-0 sudo[108957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:13 compute-0 python3.9[108959]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:21:13 compute-0 sudo[108957]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:13 compute-0 sudo[109110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frucvihbmsysnwpqolweydmjjgbnibip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712873.2756279-59-133135807641461/AnsiballZ_systemd_service.py'
Jan 06 15:21:13 compute-0 sudo[109110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:13 compute-0 python3.9[109112]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:21:15 compute-0 sudo[109110]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:17 compute-0 sudo[109263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjzsnmdgtwzovhvlmveuoidjumlaukqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712876.832102-59-261646011490210/AnsiballZ_systemd_service.py'
Jan 06 15:21:17 compute-0 sudo[109263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:17 compute-0 python3.9[109265]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:21:17 compute-0 sudo[109263]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:18 compute-0 sudo[109416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ludhykcdbdhcrqvjiahxahuizenozenp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712877.6884985-59-37593873545043/AnsiballZ_systemd_service.py'
Jan 06 15:21:18 compute-0 sudo[109416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:18 compute-0 python3.9[109418]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:21:18 compute-0 sudo[109416]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:19 compute-0 sudo[109569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eowiokmzfswjsuocvuqdfhjxntyhtxff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712878.6438491-59-225212338083820/AnsiballZ_systemd_service.py'
Jan 06 15:21:19 compute-0 sudo[109569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:19 compute-0 python3.9[109571]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:21:19 compute-0 sudo[109569]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:20 compute-0 sudo[109722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohfplxdubgebkqqidoruysbhxmlkguyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712879.6221256-59-173604275746663/AnsiballZ_systemd_service.py'
Jan 06 15:21:20 compute-0 sudo[109722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:20 compute-0 python3.9[109724]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:21:20 compute-0 sudo[109722]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:21 compute-0 sudo[109875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbzzsyutvnexjchyjlapeipmmjwhjqjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712880.752497-111-55673101359325/AnsiballZ_file.py'
Jan 06 15:21:21 compute-0 sudo[109875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:21 compute-0 python3.9[109877]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:21 compute-0 sudo[109875]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:22 compute-0 sudo[110033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnddupnkumllnmrmbmtsrzevrqtpqcqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712881.7136583-111-161738206213726/AnsiballZ_file.py'
Jan 06 15:21:22 compute-0 sudo[110033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:22 compute-0 podman[110001]: 2026-01-06 15:21:22.171209237 +0000 UTC m=+0.107746712 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 06 15:21:22 compute-0 python3.9[110037]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:22 compute-0 sudo[110033]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:22 compute-0 sudo[110198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpcycmgdfhpywepnotcrvqrzenbtxyon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712882.4858196-111-233638934384301/AnsiballZ_file.py'
Jan 06 15:21:22 compute-0 sudo[110198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:23 compute-0 python3.9[110200]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:23 compute-0 sudo[110198]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:23 compute-0 sudo[110350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeizixontxcejwinxsadwiogsnaseynq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712883.2112625-111-160208066616801/AnsiballZ_file.py'
Jan 06 15:21:23 compute-0 sudo[110350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:23 compute-0 python3.9[110352]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:23 compute-0 sudo[110350]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:24 compute-0 sudo[110502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogflcvzjnhcqbwuuvkixnflhknytteli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712883.8801148-111-9311284518465/AnsiballZ_file.py'
Jan 06 15:21:24 compute-0 sudo[110502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:24 compute-0 python3.9[110504]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:24 compute-0 sudo[110502]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:24 compute-0 sudo[110654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wswfgsbawkxtzscvwgmzvxxjdhlinydi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712884.6321557-111-112511240943747/AnsiballZ_file.py'
Jan 06 15:21:24 compute-0 sudo[110654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:25 compute-0 python3.9[110656]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:25 compute-0 sudo[110654]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:25 compute-0 sudo[110806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxselidtsemibercumjdqittabnkpitm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712885.3730052-111-241827528504261/AnsiballZ_file.py'
Jan 06 15:21:25 compute-0 sudo[110806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:25 compute-0 python3.9[110808]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:25 compute-0 sudo[110806]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:26 compute-0 sudo[110958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oymjdgnrjmyrslccmibxkjrrfxqqojkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712886.1191292-161-205040261436706/AnsiballZ_file.py'
Jan 06 15:21:26 compute-0 sudo[110958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:26 compute-0 python3.9[110960]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:26 compute-0 sudo[110958]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:27 compute-0 sudo[111110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqwlxpgkwajukbbfmgzajeccyfnciyyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712886.827047-161-112806196124267/AnsiballZ_file.py'
Jan 06 15:21:27 compute-0 sudo[111110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:27 compute-0 python3.9[111112]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:27 compute-0 sudo[111110]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:27 compute-0 sudo[111262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dotmpoiapcausgtpvatahlwiqzfneunl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712887.5228934-161-39576231528929/AnsiballZ_file.py'
Jan 06 15:21:27 compute-0 sudo[111262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:28 compute-0 python3.9[111264]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:28 compute-0 sudo[111262]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:28 compute-0 sudo[111414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whjzioidehrdsktmpmsncpthhqzpmynu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712888.3270204-161-138630225796553/AnsiballZ_file.py'
Jan 06 15:21:28 compute-0 sudo[111414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:28 compute-0 python3.9[111416]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:28 compute-0 sudo[111414]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:29 compute-0 sudo[111566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taavcncwmvexgsknxniesbmfyzltgalg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712889.1325634-161-123378982084011/AnsiballZ_file.py'
Jan 06 15:21:29 compute-0 sudo[111566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:29 compute-0 python3.9[111568]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:29 compute-0 sudo[111566]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:30 compute-0 sudo[111718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubpocuunbqysbmmwrvktxzwnitcribqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712889.9550579-161-146905930203561/AnsiballZ_file.py'
Jan 06 15:21:30 compute-0 sudo[111718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:30 compute-0 python3.9[111720]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:30 compute-0 sudo[111718]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:31 compute-0 sudo[111870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rryhxjuxpgdjpmkakfsfudjqezoyvioj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712890.6894727-161-269580138665880/AnsiballZ_file.py'
Jan 06 15:21:31 compute-0 sudo[111870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:31 compute-0 python3.9[111872]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:21:31 compute-0 sudo[111870]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:32 compute-0 sudo[112022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciijssodsxhaavpwkormwizihgzbccuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712891.6372893-212-109555005531970/AnsiballZ_command.py'
Jan 06 15:21:32 compute-0 sudo[112022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:32 compute-0 python3.9[112024]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:21:32 compute-0 sudo[112022]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:33 compute-0 python3.9[112176]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 06 15:21:33 compute-0 sudo[112326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wslqnxbkyclgxhieysgppbzdbbdupmpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712893.457137-230-201962848415705/AnsiballZ_systemd_service.py'
Jan 06 15:21:33 compute-0 sudo[112326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:34 compute-0 python3.9[112328]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:21:34 compute-0 systemd[1]: Reloading.
Jan 06 15:21:34 compute-0 systemd-rc-local-generator[112354]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:21:34 compute-0 systemd-sysv-generator[112358]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:21:34 compute-0 sudo[112326]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:35 compute-0 sudo[112514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhtlvofzjxfvevvahtiwwpubckfkikfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712894.6903594-238-79752078452371/AnsiballZ_command.py'
Jan 06 15:21:35 compute-0 sudo[112514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:35 compute-0 python3.9[112516]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:21:35 compute-0 sudo[112514]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:35 compute-0 sudo[112667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvyyrdqroowdbstcwrbnkppwlmlomhut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712895.4317808-238-35208660792452/AnsiballZ_command.py'
Jan 06 15:21:35 compute-0 sudo[112667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:35 compute-0 python3.9[112669]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:21:35 compute-0 sudo[112667]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:36 compute-0 sudo[112820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpdunxyfogoxzionfklwodytrxawyymx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712896.1423318-238-102450056148394/AnsiballZ_command.py'
Jan 06 15:21:36 compute-0 sudo[112820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:36 compute-0 python3.9[112822]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:21:36 compute-0 sudo[112820]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:37 compute-0 sudo[112973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nosldlhjhjwvlvxqdnnoyhubwjhjtjut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712896.7830288-238-59910515310444/AnsiballZ_command.py'
Jan 06 15:21:37 compute-0 sudo[112973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:37 compute-0 python3.9[112975]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:21:37 compute-0 sudo[112973]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:37 compute-0 sudo[113126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rswdvayiiidhouufatsbmcmcjhzsfmqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712897.4064264-238-236179097984670/AnsiballZ_command.py'
Jan 06 15:21:37 compute-0 sudo[113126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:37 compute-0 python3.9[113128]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:21:37 compute-0 sudo[113126]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:38 compute-0 sudo[113279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbwozzcmvrmiplwfpkjbqvdqhjdlphwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712898.0786526-238-162816588362573/AnsiballZ_command.py'
Jan 06 15:21:38 compute-0 sudo[113279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:38 compute-0 python3.9[113281]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:21:39 compute-0 sudo[113279]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:40 compute-0 sudo[113432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyoeqtepmrjmfesqawupzrirwpqbznzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712899.759729-238-183400709682473/AnsiballZ_command.py'
Jan 06 15:21:40 compute-0 sudo[113432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:40 compute-0 python3.9[113434]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:21:40 compute-0 sudo[113432]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:41 compute-0 sudo[113585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cehfypngmtaoaxkrsrpcsqiqoynycrvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712900.6444438-292-125671496804136/AnsiballZ_getent.py'
Jan 06 15:21:41 compute-0 sudo[113585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:41 compute-0 python3.9[113587]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 06 15:21:41 compute-0 sudo[113585]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:42 compute-0 sudo[113738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgkbdneaoxxappxncqryzpcyowjmuaph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712901.5190847-300-11925682507368/AnsiballZ_group.py'
Jan 06 15:21:42 compute-0 sudo[113738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:42 compute-0 python3.9[113740]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 06 15:21:42 compute-0 groupadd[113741]: group added to /etc/group: name=libvirt, GID=42473
Jan 06 15:21:42 compute-0 groupadd[113741]: group added to /etc/gshadow: name=libvirt
Jan 06 15:21:42 compute-0 groupadd[113741]: new group: name=libvirt, GID=42473
Jan 06 15:21:42 compute-0 sudo[113738]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:42 compute-0 podman[113742]: 2026-01-06 15:21:42.437959367 +0000 UTC m=+0.117926400 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Jan 06 15:21:43 compute-0 sudo[113922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjcfvnhtanxbstqqihichkglwynhotrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712902.5887296-308-55433982933672/AnsiballZ_user.py'
Jan 06 15:21:43 compute-0 sudo[113922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:43 compute-0 python3.9[113924]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 06 15:21:43 compute-0 useradd[113926]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 06 15:21:43 compute-0 sudo[113922]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:44 compute-0 sudo[114082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfxjqfkyptubxvmsezhmbzhbzymmukrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712903.796794-319-231841114695143/AnsiballZ_setup.py'
Jan 06 15:21:44 compute-0 sudo[114082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:44 compute-0 python3.9[114084]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:21:44 compute-0 sudo[114082]: pam_unix(sudo:session): session closed for user root
Jan 06 15:21:45 compute-0 sudo[114166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnxcueugzlikghhtdrazgwpfmoafurij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767712903.796794-319-231841114695143/AnsiballZ_dnf.py'
Jan 06 15:21:45 compute-0 sudo[114166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:21:46 compute-0 python3.9[114168]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:21:52 compute-0 podman[114181]: 2026-01-06 15:21:52.814797972 +0000 UTC m=+0.078586857 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 06 15:21:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:21:53.654 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:21:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:21:53.658 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:21:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:21:53.658 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:22:12 compute-0 podman[114379]: 2026-01-06 15:22:12.878793569 +0000 UTC m=+0.145958714 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 06 15:22:20 compute-0 kernel: SELinux:  Converting 2758 SID table entries...
Jan 06 15:22:20 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 06 15:22:20 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 06 15:22:20 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 06 15:22:20 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 06 15:22:20 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 06 15:22:20 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 06 15:22:20 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 06 15:22:23 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 06 15:22:23 compute-0 podman[114413]: 2026-01-06 15:22:23.8426203 +0000 UTC m=+0.089514422 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 06 15:22:30 compute-0 kernel: SELinux:  Converting 2758 SID table entries...
Jan 06 15:22:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 06 15:22:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 06 15:22:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 06 15:22:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 06 15:22:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 06 15:22:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 06 15:22:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 06 15:22:43 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 06 15:22:43 compute-0 podman[114835]: 2026-01-06 15:22:43.883494869 +0000 UTC m=+0.127760859 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:22:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:22:53.655 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:22:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:22:53.657 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:22:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:22:53.657 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:22:54 compute-0 podman[120010]: 2026-01-06 15:22:54.815920324 +0000 UTC m=+0.083851306 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:23:14 compute-0 podman[130128]: 2026-01-06 15:23:14.866699338 +0000 UTC m=+0.135868308 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 15:23:25 compute-0 podman[131361]: 2026-01-06 15:23:25.84849413 +0000 UTC m=+0.103446273 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:23:34 compute-0 kernel: SELinux:  Converting 2759 SID table entries...
Jan 06 15:23:34 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 06 15:23:34 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 06 15:23:34 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 06 15:23:34 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 06 15:23:34 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 06 15:23:34 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 06 15:23:34 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 06 15:23:35 compute-0 groupadd[131392]: group added to /etc/group: name=dnsmasq, GID=993
Jan 06 15:23:35 compute-0 groupadd[131392]: group added to /etc/gshadow: name=dnsmasq
Jan 06 15:23:35 compute-0 groupadd[131392]: new group: name=dnsmasq, GID=993
Jan 06 15:23:35 compute-0 useradd[131399]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 06 15:23:35 compute-0 dbus-broker-launch[741]: Noticed file-system modification, trigger reload.
Jan 06 15:23:35 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 06 15:23:35 compute-0 dbus-broker-launch[741]: Noticed file-system modification, trigger reload.
Jan 06 15:23:36 compute-0 groupadd[131412]: group added to /etc/group: name=clevis, GID=992
Jan 06 15:23:36 compute-0 groupadd[131412]: group added to /etc/gshadow: name=clevis
Jan 06 15:23:36 compute-0 groupadd[131412]: new group: name=clevis, GID=992
Jan 06 15:23:36 compute-0 useradd[131419]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 06 15:23:36 compute-0 usermod[131429]: add 'clevis' to group 'tss'
Jan 06 15:23:36 compute-0 usermod[131429]: add 'clevis' to shadow group 'tss'
Jan 06 15:23:39 compute-0 polkitd[43625]: Reloading rules
Jan 06 15:23:39 compute-0 polkitd[43625]: Collecting garbage unconditionally...
Jan 06 15:23:39 compute-0 polkitd[43625]: Loading rules from directory /etc/polkit-1/rules.d
Jan 06 15:23:39 compute-0 polkitd[43625]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 06 15:23:39 compute-0 polkitd[43625]: Finished loading, compiling and executing 3 rules
Jan 06 15:23:39 compute-0 polkitd[43625]: Reloading rules
Jan 06 15:23:39 compute-0 polkitd[43625]: Collecting garbage unconditionally...
Jan 06 15:23:39 compute-0 polkitd[43625]: Loading rules from directory /etc/polkit-1/rules.d
Jan 06 15:23:39 compute-0 polkitd[43625]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 06 15:23:39 compute-0 polkitd[43625]: Finished loading, compiling and executing 3 rules
Jan 06 15:23:40 compute-0 groupadd[131616]: group added to /etc/group: name=ceph, GID=167
Jan 06 15:23:40 compute-0 groupadd[131616]: group added to /etc/gshadow: name=ceph
Jan 06 15:23:40 compute-0 groupadd[131616]: new group: name=ceph, GID=167
Jan 06 15:23:40 compute-0 useradd[131622]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 06 15:23:44 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 06 15:23:44 compute-0 sshd[1006]: Received signal 15; terminating.
Jan 06 15:23:44 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 06 15:23:44 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 06 15:23:44 compute-0 systemd[1]: sshd.service: Consumed 2.372s CPU time, read 32.0K from disk, written 4.0K to disk.
Jan 06 15:23:44 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 06 15:23:44 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 06 15:23:44 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 06 15:23:44 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 06 15:23:44 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 06 15:23:44 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 06 15:23:44 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 06 15:23:44 compute-0 sshd[132141]: Server listening on 0.0.0.0 port 22.
Jan 06 15:23:44 compute-0 sshd[132141]: Server listening on :: port 22.
Jan 06 15:23:44 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 06 15:23:45 compute-0 podman[132173]: 2026-01-06 15:23:45.082454385 +0000 UTC m=+0.167223369 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 06 15:23:47 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 06 15:23:47 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 06 15:23:47 compute-0 systemd[1]: Reloading.
Jan 06 15:23:47 compute-0 systemd-rc-local-generator[132424]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:23:47 compute-0 systemd-sysv-generator[132427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:23:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 06 15:23:51 compute-0 sudo[114166]: pam_unix(sudo:session): session closed for user root
Jan 06 15:23:52 compute-0 sudo[136959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lapbudzlpjnrwmnnfezdlognhcxzynef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713032.0036697-331-278315383097554/AnsiballZ_systemd.py'
Jan 06 15:23:52 compute-0 sudo[136959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:23:52 compute-0 python3.9[136982]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 06 15:23:52 compute-0 systemd[1]: Reloading.
Jan 06 15:23:53 compute-0 systemd-rc-local-generator[137399]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:23:53 compute-0 systemd-sysv-generator[137403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:23:53 compute-0 sudo[136959]: pam_unix(sudo:session): session closed for user root
Jan 06 15:23:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:23:53.658 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:23:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:23:53.661 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:23:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:23:53.662 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:23:53 compute-0 sudo[138283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuuqisgqedbkzxzdqgjqjlljzkbvowfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713033.5693936-331-114766281899154/AnsiballZ_systemd.py'
Jan 06 15:23:53 compute-0 sudo[138283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:23:54 compute-0 python3.9[138300]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 06 15:23:54 compute-0 systemd[1]: Reloading.
Jan 06 15:23:54 compute-0 systemd-rc-local-generator[138682]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:23:54 compute-0 systemd-sysv-generator[138685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:23:54 compute-0 sudo[138283]: pam_unix(sudo:session): session closed for user root
Jan 06 15:23:54 compute-0 sudo[139404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mblvokacbijrlkvroilmkntvkrbukgbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713034.630051-331-268454115010426/AnsiballZ_systemd.py'
Jan 06 15:23:54 compute-0 sudo[139404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:23:55 compute-0 python3.9[139425]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 06 15:23:55 compute-0 systemd[1]: Reloading.
Jan 06 15:23:55 compute-0 systemd-sysv-generator[139914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:23:55 compute-0 systemd-rc-local-generator[139907]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:23:55 compute-0 sudo[139404]: pam_unix(sudo:session): session closed for user root
Jan 06 15:23:56 compute-0 sudo[140706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceakseiylbglwcjpspzcwsmytubzpgsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713035.721194-331-35065126469786/AnsiballZ_systemd.py'
Jan 06 15:23:56 compute-0 podman[140611]: 2026-01-06 15:23:56.088690489 +0000 UTC m=+0.075006250 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Jan 06 15:23:56 compute-0 sudo[140706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:23:56 compute-0 python3.9[140727]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 06 15:23:56 compute-0 systemd[1]: Reloading.
Jan 06 15:23:56 compute-0 systemd-rc-local-generator[141055]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:23:56 compute-0 systemd-sysv-generator[141063]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:23:56 compute-0 sudo[140706]: pam_unix(sudo:session): session closed for user root
Jan 06 15:23:57 compute-0 sudo[141736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wowgxjzpnupozhcjaigydnshagdtxeee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713036.9290307-360-125415877308655/AnsiballZ_systemd.py'
Jan 06 15:23:57 compute-0 sudo[141736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:23:57 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 06 15:23:57 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 06 15:23:57 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.365s CPU time.
Jan 06 15:23:57 compute-0 systemd[1]: run-rec51abf5c9a7493c97c79cb2f3f9d706.service: Deactivated successfully.
Jan 06 15:23:57 compute-0 python3.9[141739]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:23:57 compute-0 systemd[1]: Reloading.
Jan 06 15:23:57 compute-0 systemd-sysv-generator[141771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:23:57 compute-0 systemd-rc-local-generator[141767]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:23:57 compute-0 sudo[141736]: pam_unix(sudo:session): session closed for user root
Jan 06 15:23:58 compute-0 sudo[141927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-komwiyhldiufgojlcedtetacygkogezk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713038.0788894-360-194000270530899/AnsiballZ_systemd.py'
Jan 06 15:23:58 compute-0 sudo[141927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:23:58 compute-0 python3.9[141929]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:23:58 compute-0 systemd[1]: Reloading.
Jan 06 15:23:58 compute-0 systemd-rc-local-generator[141958]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:23:58 compute-0 systemd-sysv-generator[141963]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:23:59 compute-0 sudo[141927]: pam_unix(sudo:session): session closed for user root
Jan 06 15:23:59 compute-0 sudo[142118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atxjrsgrxvldmcawqxkeodtqqugjizcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713039.2145364-360-110601548087893/AnsiballZ_systemd.py'
Jan 06 15:23:59 compute-0 sudo[142118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:23:59 compute-0 python3.9[142120]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:23:59 compute-0 systemd[1]: Reloading.
Jan 06 15:24:00 compute-0 systemd-rc-local-generator[142152]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:24:00 compute-0 systemd-sysv-generator[142155]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:24:00 compute-0 sudo[142118]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:00 compute-0 sudo[142309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gohpcsxexfknifbpophvfwfmkxtfxaop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713040.4299152-360-220692985177187/AnsiballZ_systemd.py'
Jan 06 15:24:00 compute-0 sudo[142309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:01 compute-0 python3.9[142311]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:01 compute-0 sudo[142309]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:01 compute-0 sudo[142464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnhkjyjmgnuckcjegychhecchujijzrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713041.3536825-360-229393587804212/AnsiballZ_systemd.py'
Jan 06 15:24:01 compute-0 sudo[142464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:01 compute-0 python3.9[142466]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:02 compute-0 systemd[1]: Reloading.
Jan 06 15:24:02 compute-0 systemd-rc-local-generator[142487]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:24:02 compute-0 systemd-sysv-generator[142491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:24:02 compute-0 sudo[142464]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:03 compute-0 sudo[142654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwaxmqoaviaezetpglnfqicduqsrigaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713042.6404343-396-258799209057926/AnsiballZ_systemd.py'
Jan 06 15:24:03 compute-0 sudo[142654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:03 compute-0 python3.9[142656]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 06 15:24:03 compute-0 systemd[1]: Reloading.
Jan 06 15:24:03 compute-0 systemd-sysv-generator[142685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:24:03 compute-0 systemd-rc-local-generator[142681]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:24:03 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 06 15:24:03 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 06 15:24:03 compute-0 sudo[142654]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:04 compute-0 sudo[142847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqgappvtpulozwspbprbsiovybccscdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713043.961962-404-70878729092654/AnsiballZ_systemd.py'
Jan 06 15:24:04 compute-0 sudo[142847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:04 compute-0 python3.9[142849]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:04 compute-0 sudo[142847]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:05 compute-0 sudo[143002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oloepmbyabrvqmstedhgcfsidvgzngau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713044.814426-404-87919590916714/AnsiballZ_systemd.py'
Jan 06 15:24:05 compute-0 sudo[143002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:05 compute-0 python3.9[143004]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:05 compute-0 sudo[143002]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:06 compute-0 sudo[143157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erazrfshqjeqzfhvgcqshushbkpavtrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713045.6909235-404-202622533053068/AnsiballZ_systemd.py'
Jan 06 15:24:06 compute-0 sudo[143157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:06 compute-0 python3.9[143159]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:06 compute-0 sudo[143157]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:06 compute-0 sudo[143312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-impreyujdpxblzzgmcstujbgkkdjfomm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713046.5838242-404-230845162631746/AnsiballZ_systemd.py'
Jan 06 15:24:06 compute-0 sudo[143312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:07 compute-0 python3.9[143314]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:07 compute-0 sudo[143312]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:07 compute-0 sudo[143467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prldqlvwszsoqdniufwewjwhrdqjqqsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713047.3892953-404-190074267471759/AnsiballZ_systemd.py'
Jan 06 15:24:07 compute-0 sudo[143467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:08 compute-0 python3.9[143469]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:08 compute-0 sudo[143467]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:08 compute-0 sudo[143622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azvvqvkzwsdjthuajijtkldsyqoclxcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713048.2960577-404-51351978207697/AnsiballZ_systemd.py'
Jan 06 15:24:08 compute-0 sudo[143622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:08 compute-0 python3.9[143624]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:09 compute-0 sudo[143622]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:09 compute-0 sudo[143777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbrnuqijhdhadabgdgdofgmclbuqinoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713049.1776228-404-5252105992829/AnsiballZ_systemd.py'
Jan 06 15:24:09 compute-0 sudo[143777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:09 compute-0 python3.9[143779]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:10 compute-0 sudo[143777]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:11 compute-0 sudo[143932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxkoxlhmsvwtwhpivjethgshcaiggkqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713050.9697926-404-109548201799244/AnsiballZ_systemd.py'
Jan 06 15:24:11 compute-0 sudo[143932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:11 compute-0 python3.9[143934]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:11 compute-0 sudo[143932]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:12 compute-0 sudo[144087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iukiswiogyespbjymuftykdqnyuvcfqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713051.8237183-404-214363362528482/AnsiballZ_systemd.py'
Jan 06 15:24:12 compute-0 sudo[144087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:12 compute-0 python3.9[144089]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:12 compute-0 sudo[144087]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:13 compute-0 sudo[144242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olmqxyfksltbvthyhjhbfiegugbihwke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713052.7599566-404-233640501084543/AnsiballZ_systemd.py'
Jan 06 15:24:13 compute-0 sudo[144242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:13 compute-0 python3.9[144244]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:13 compute-0 sudo[144242]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:14 compute-0 sudo[144397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvfuehcntnanvqbhkkfntrtsxswjzvzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713053.75734-404-144451010498874/AnsiballZ_systemd.py'
Jan 06 15:24:14 compute-0 sudo[144397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:14 compute-0 python3.9[144399]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:14 compute-0 sudo[144397]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:15 compute-0 sudo[144552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cidxhlsrjdsxgxmmdlufernhtsipepjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713054.6689444-404-270931386690820/AnsiballZ_systemd.py'
Jan 06 15:24:15 compute-0 sudo[144552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:15 compute-0 python3.9[144554]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:15 compute-0 sudo[144552]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:15 compute-0 podman[144556]: 2026-01-06 15:24:15.545229636 +0000 UTC m=+0.134870198 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:24:15 compute-0 sudo[144733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgvmffovgptsfdvclwbrnjigiwfhmmfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713055.6268141-404-210265694108720/AnsiballZ_systemd.py'
Jan 06 15:24:15 compute-0 sudo[144733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:16 compute-0 python3.9[144735]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:17 compute-0 sudo[144733]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:17 compute-0 sudo[144888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iinqdoyofsdkfvmyhipphhrzhvrgapwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713057.4674215-404-265380010798116/AnsiballZ_systemd.py'
Jan 06 15:24:17 compute-0 sudo[144888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:18 compute-0 python3.9[144890]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 06 15:24:18 compute-0 sudo[144888]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:19 compute-0 sudo[145043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsdrhdtkcqnqansjplnkxxbtmegihzup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713059.1735878-506-89476330995542/AnsiballZ_file.py'
Jan 06 15:24:19 compute-0 sudo[145043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:19 compute-0 python3.9[145045]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:24:19 compute-0 sudo[145043]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:20 compute-0 sudo[145195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czyiupuvkqciexiemdcuvcvciibcwafo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713059.852897-506-97796403310649/AnsiballZ_file.py'
Jan 06 15:24:20 compute-0 sudo[145195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:20 compute-0 python3.9[145197]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:24:20 compute-0 sudo[145195]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:20 compute-0 sudo[145347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmbbxscvlfbbvgpqcubgebgvfqkhojaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713060.539677-506-220102122886283/AnsiballZ_file.py'
Jan 06 15:24:20 compute-0 sudo[145347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:21 compute-0 python3.9[145349]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:24:21 compute-0 sudo[145347]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:21 compute-0 sudo[145499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfufiykiszxrtpxooienuebakdmrqmum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713061.2310038-506-280910590487707/AnsiballZ_file.py'
Jan 06 15:24:21 compute-0 sudo[145499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:21 compute-0 python3.9[145501]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:24:21 compute-0 sudo[145499]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:22 compute-0 sudo[145651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebylfcqibuzshlovnrzzdrkzpgpysrin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713061.9071453-506-214438942988509/AnsiballZ_file.py'
Jan 06 15:24:22 compute-0 sudo[145651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:22 compute-0 python3.9[145653]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:24:22 compute-0 sudo[145651]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:22 compute-0 sudo[145803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skmycvlhyyduefcowzahpjlllxpmzrrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713062.5847964-506-28224423774975/AnsiballZ_file.py'
Jan 06 15:24:22 compute-0 sudo[145803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:23 compute-0 python3.9[145805]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:24:23 compute-0 sudo[145803]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:23 compute-0 sudo[145955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsryaavbmhshntaenwvtalvklktpmaqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713063.4159837-549-108174975443285/AnsiballZ_stat.py'
Jan 06 15:24:23 compute-0 sudo[145955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:24 compute-0 python3.9[145957]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:24 compute-0 sudo[145955]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:24 compute-0 sudo[146080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjufitoolfzgmskqilnvpnscbofnpdnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713063.4159837-549-108174975443285/AnsiballZ_copy.py'
Jan 06 15:24:24 compute-0 sudo[146080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:25 compute-0 python3.9[146082]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767713063.4159837-549-108174975443285/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:25 compute-0 sudo[146080]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:25 compute-0 sudo[146232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adtttuyynuculjjwvyvofynyigfnkqdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713065.2591298-549-187848607035567/AnsiballZ_stat.py'
Jan 06 15:24:25 compute-0 sudo[146232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:25 compute-0 python3.9[146234]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:25 compute-0 sudo[146232]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:26 compute-0 sudo[146367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywexnazizusgymfvhnkzrkwrqmmbbpjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713065.2591298-549-187848607035567/AnsiballZ_copy.py'
Jan 06 15:24:26 compute-0 sudo[146367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:26 compute-0 podman[146331]: 2026-01-06 15:24:26.275194608 +0000 UTC m=+0.082846192 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:24:26 compute-0 python3.9[146378]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767713065.2591298-549-187848607035567/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:26 compute-0 sudo[146367]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:27 compute-0 sudo[146528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paeeitoaxbgowrytmvxjcxpdmytfwkoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713066.6472201-549-29753193980216/AnsiballZ_stat.py'
Jan 06 15:24:27 compute-0 sudo[146528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:27 compute-0 python3.9[146530]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:27 compute-0 sudo[146528]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:27 compute-0 sudo[146653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npiwbjhapengjvisdhxkgfqblzmxcqen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713066.6472201-549-29753193980216/AnsiballZ_copy.py'
Jan 06 15:24:27 compute-0 sudo[146653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:27 compute-0 python3.9[146655]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767713066.6472201-549-29753193980216/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:27 compute-0 sudo[146653]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:28 compute-0 sudo[146805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxtsjlrrbnhdrvmqljbpoukjyxgogpvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713068.0835881-549-117849108572593/AnsiballZ_stat.py'
Jan 06 15:24:28 compute-0 sudo[146805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:28 compute-0 python3.9[146807]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:28 compute-0 sudo[146805]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:29 compute-0 sudo[146930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruxwxyncuhovqshpywxwlgiijqymrjgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713068.0835881-549-117849108572593/AnsiballZ_copy.py'
Jan 06 15:24:29 compute-0 sudo[146930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:29 compute-0 python3.9[146932]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767713068.0835881-549-117849108572593/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:29 compute-0 sudo[146930]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:29 compute-0 sudo[147082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbhmulobnhunrtyanyigveejxjoblork ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713069.4358003-549-134351174401886/AnsiballZ_stat.py'
Jan 06 15:24:29 compute-0 sudo[147082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:29 compute-0 python3.9[147084]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:30 compute-0 sudo[147082]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:30 compute-0 sudo[147207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlgxrrryqewvcqkjgoxkjfvgjbtrmhdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713069.4358003-549-134351174401886/AnsiballZ_copy.py'
Jan 06 15:24:30 compute-0 sudo[147207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:30 compute-0 python3.9[147209]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767713069.4358003-549-134351174401886/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:30 compute-0 sudo[147207]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:31 compute-0 sudo[147359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbksjgmqzxqocjutfyujntbwhozkooac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713070.893758-549-55737885253463/AnsiballZ_stat.py'
Jan 06 15:24:31 compute-0 sudo[147359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:31 compute-0 python3.9[147361]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:31 compute-0 sudo[147359]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:31 compute-0 sudo[147484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xodsrkhzpkitekkpxsmfkebupbmigqev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713070.893758-549-55737885253463/AnsiballZ_copy.py'
Jan 06 15:24:31 compute-0 sudo[147484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:32 compute-0 python3.9[147486]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767713070.893758-549-55737885253463/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:32 compute-0 sudo[147484]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:32 compute-0 sudo[147636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jttffwceksolzhjxviqqsrdlrfcxbnrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713072.410019-549-112270266467845/AnsiballZ_stat.py'
Jan 06 15:24:32 compute-0 sudo[147636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:33 compute-0 python3.9[147638]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:33 compute-0 sudo[147636]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:33 compute-0 sudo[147759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnouqnneftisltkpbitpepizercgdrqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713072.410019-549-112270266467845/AnsiballZ_copy.py'
Jan 06 15:24:33 compute-0 sudo[147759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:33 compute-0 python3.9[147761]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767713072.410019-549-112270266467845/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:33 compute-0 sudo[147759]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:34 compute-0 sudo[147911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjwzutuciatgnottwujqpamzulivouds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713073.9390244-549-85793536924466/AnsiballZ_stat.py'
Jan 06 15:24:34 compute-0 sudo[147911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:34 compute-0 python3.9[147913]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:34 compute-0 sudo[147911]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:35 compute-0 sudo[148036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbcufsfdkejfikayryjujkzthdffellk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713073.9390244-549-85793536924466/AnsiballZ_copy.py'
Jan 06 15:24:35 compute-0 sudo[148036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:35 compute-0 python3.9[148038]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767713073.9390244-549-85793536924466/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:35 compute-0 sudo[148036]: pam_unix(sudo:session): session closed for user root
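Note: the stat/copy pairs above deploy the rendered libvirt configuration files with the ownership and modes recorded in the module arguments. A roughly equivalent manual sequence, sketched only from those logged arguments (the rendered file contents are not visible in the log, and the local source names below are just the logged _original_basename values), would be:

    # ownership and modes taken from the logged ansible copy arguments
    install -o libvirt -g libvirt -m 0640 qemu.conf         /etc/libvirt/qemu.conf
    install -o libvirt -g libvirt -m 0640 virtsecretd.conf  /etc/libvirt/virtsecretd.conf
    install -o libvirt -g libvirt -m 0600 auth.conf         /etc/libvirt/auth.conf
    install -o libvirt -g libvirt -m 0640 sasl_libvirt.conf /etc/sasl2/libvirt.conf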
Jan 06 15:24:35 compute-0 sudo[148188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhsbfninealmuvahqqkkluktduqiiygs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713075.4344928-662-160382442663413/AnsiballZ_command.py'
Jan 06 15:24:35 compute-0 sudo[148188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:36 compute-0 python3.9[148190]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 06 15:24:36 compute-0 sudo[148188]: pam_unix(sudo:session): session closed for user root
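Note: the command above registers SASL credentials for the libvirt migration user in the sasldb that the libvirt daemons consult. Reproduced by hand it would look like the following (the real password is supplied by the playbook; the value shown in stdin here is clearly a CI placeholder):

    # add user 'migration' in realm 'openstack' to libvirt's sasldb;
    # -p reads the password from stdin instead of prompting
    echo 'CHANGE_ME' | saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration
    # confirm the entry exists
    sasldblistusers2 -f /etc/libvirt/passwd.db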
Jan 06 15:24:36 compute-0 sudo[148341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkwkiexqkzebaxarapozglbogfokeadv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713076.5052676-671-6044042993597/AnsiballZ_file.py'
Jan 06 15:24:36 compute-0 sudo[148341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:37 compute-0 python3.9[148343]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:37 compute-0 sudo[148341]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:37 compute-0 sudo[148493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lngnmdwweqgranohogmmypzdxsssgqoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713077.2814288-671-161746795505896/AnsiballZ_file.py'
Jan 06 15:24:37 compute-0 sudo[148493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:37 compute-0 python3.9[148495]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:37 compute-0 sudo[148493]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:38 compute-0 sudo[148645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efaxnplluuvsckmixhklehlwkaqaummo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713077.9406257-671-99703878399299/AnsiballZ_file.py'
Jan 06 15:24:38 compute-0 sudo[148645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:38 compute-0 python3.9[148647]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:38 compute-0 sudo[148645]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:39 compute-0 sudo[148797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ladgxnxikcrzvzpamjjvkqirvskjgdec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713078.7135854-671-82576557854324/AnsiballZ_file.py'
Jan 06 15:24:39 compute-0 sudo[148797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:39 compute-0 python3.9[148799]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:39 compute-0 sudo[148797]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:39 compute-0 sudo[148950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhcsqxofquwcywzouvaqrdgvhijtqqgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713079.3992174-671-107496212560504/AnsiballZ_file.py'
Jan 06 15:24:39 compute-0 sudo[148950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:39 compute-0 python3.9[148952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:39 compute-0 sudo[148950]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:40 compute-0 sudo[149102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ormfjzipoylhyyjvrkhutxnshvxqtsfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713080.0053878-671-101236088174141/AnsiballZ_file.py'
Jan 06 15:24:40 compute-0 sudo[149102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:40 compute-0 python3.9[149104]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:40 compute-0 sudo[149102]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:41 compute-0 sudo[149254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wllmxxoqppytuiraliywlvlzsvxfjubh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713080.6752286-671-275700229014133/AnsiballZ_file.py'
Jan 06 15:24:41 compute-0 sudo[149254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:41 compute-0 python3.9[149256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:41 compute-0 sudo[149254]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:41 compute-0 sudo[149406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iozknbhbksseiaocgawziqujwccdykor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713081.376031-671-10445991469901/AnsiballZ_file.py'
Jan 06 15:24:41 compute-0 sudo[149406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:41 compute-0 python3.9[149408]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:41 compute-0 sudo[149406]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:42 compute-0 sudo[149558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qotqnfkktggimnaghtkuoujattjlqsyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713082.0530229-671-147879105302934/AnsiballZ_file.py'
Jan 06 15:24:42 compute-0 sudo[149558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:42 compute-0 python3.9[149560]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:42 compute-0 sudo[149558]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:43 compute-0 sudo[149710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcwerjgtpbjxlgsdtrwicgiwsibqvaos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713082.7412574-671-211845900452380/AnsiballZ_file.py'
Jan 06 15:24:43 compute-0 sudo[149710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:43 compute-0 python3.9[149712]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:43 compute-0 sudo[149710]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:43 compute-0 sudo[149862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdtgtzbyktmsvcwmnkiwslpepxvdloew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713083.4419594-671-60380069784453/AnsiballZ_file.py'
Jan 06 15:24:43 compute-0 sudo[149862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:44 compute-0 python3.9[149864]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:44 compute-0 sudo[149862]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:44 compute-0 sudo[150014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvqrpnfpogszktsmeagoihyiigkqadw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713084.2138033-671-36331900789901/AnsiballZ_file.py'
Jan 06 15:24:44 compute-0 sudo[150014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:44 compute-0 python3.9[150016]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:44 compute-0 sudo[150014]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:45 compute-0 sudo[150166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awbrhvrfzbnmreyljnpcxtvfscuhoayx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713085.0496013-671-5387258535226/AnsiballZ_file.py'
Jan 06 15:24:45 compute-0 sudo[150166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:45 compute-0 python3.9[150168]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:45 compute-0 sudo[150166]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:45 compute-0 podman[150169]: 2026-01-06 15:24:45.892107158 +0000 UTC m=+0.154648280 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 06 15:24:46 compute-0 sudo[150344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyfcqngbuomijwutwvqkeznmzzpemwps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713085.861435-671-194635574128271/AnsiballZ_file.py'
Jan 06 15:24:46 compute-0 sudo[150344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:46 compute-0 python3.9[150346]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:46 compute-0 sudo[150344]: pam_unix(sudo:session): session closed for user root
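Note: the run of file tasks above only pre-creates systemd drop-in directories for the modular libvirt socket units, so that override snippets can be installed into them next. A shell equivalent, using exactly the socket names that appear in the logged paths, would be:

    # drop-in directories for the per-driver libvirt socket units (names from the log)
    for sock in virtlogd virtlogd-admin \
                virtnodedevd virtnodedevd-ro virtnodedevd-admin \
                virtproxyd virtproxyd-ro virtproxyd-admin \
                virtqemud virtqemud-ro virtqemud-admin \
                virtsecretd virtsecretd-ro virtsecretd-admin; do
        install -d -o root -g root -m 0755 "/etc/systemd/system/${sock}.socket.d"
    done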
Jan 06 15:24:47 compute-0 sudo[150496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtrkjseqopxqjfphosafdumzezyewcyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713086.6281092-770-255648138065959/AnsiballZ_stat.py'
Jan 06 15:24:47 compute-0 sudo[150496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:47 compute-0 python3.9[150498]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:47 compute-0 sudo[150496]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:47 compute-0 sudo[150619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbrqaynpaphgwevaondxromciqtlwogn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713086.6281092-770-255648138065959/AnsiballZ_copy.py'
Jan 06 15:24:47 compute-0 sudo[150619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:47 compute-0 python3.9[150621]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713086.6281092-770-255648138065959/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:47 compute-0 sudo[150619]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:48 compute-0 sudo[150771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avtngaxxrrzknhaymfbzclnauoivqcpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713088.0698109-770-105439185151503/AnsiballZ_stat.py'
Jan 06 15:24:48 compute-0 sudo[150771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:48 compute-0 python3.9[150773]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:48 compute-0 sudo[150771]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:49 compute-0 sudo[150894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyfazzktfiqymcysoketdmlghhjgtikb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713088.0698109-770-105439185151503/AnsiballZ_copy.py'
Jan 06 15:24:49 compute-0 sudo[150894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:49 compute-0 python3.9[150896]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713088.0698109-770-105439185151503/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:49 compute-0 sudo[150894]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:49 compute-0 sudo[151046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pylwerflexmapiiktnoynjkflqfdgyzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713089.4092493-770-145744602614265/AnsiballZ_stat.py'
Jan 06 15:24:49 compute-0 sudo[151046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:49 compute-0 python3.9[151048]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:49 compute-0 sudo[151046]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:50 compute-0 sudo[151169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfdsqckqnehmjyconjnpoanmdzakkezi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713089.4092493-770-145744602614265/AnsiballZ_copy.py'
Jan 06 15:24:50 compute-0 sudo[151169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:50 compute-0 python3.9[151171]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713089.4092493-770-145744602614265/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:50 compute-0 sudo[151169]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:51 compute-0 sudo[151321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caeqdypffoahawsxhnensftculjhwvnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713090.8283625-770-48939688787990/AnsiballZ_stat.py'
Jan 06 15:24:51 compute-0 sudo[151321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:51 compute-0 python3.9[151323]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:51 compute-0 sudo[151321]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:51 compute-0 sudo[151444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osvjwmssjwfvbtrxyqyhkzncsaxvmqlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713090.8283625-770-48939688787990/AnsiballZ_copy.py'
Jan 06 15:24:51 compute-0 sudo[151444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:51 compute-0 python3.9[151446]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713090.8283625-770-48939688787990/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:51 compute-0 sudo[151444]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:52 compute-0 sudo[151596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woytderrxaodginotortwckpvasiukkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713092.1376781-770-9257239738769/AnsiballZ_stat.py'
Jan 06 15:24:52 compute-0 sudo[151596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:52 compute-0 python3.9[151598]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:52 compute-0 sudo[151596]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:53 compute-0 sudo[151719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnkzossngilpljswgjugdhbemcoastxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713092.1376781-770-9257239738769/AnsiballZ_copy.py'
Jan 06 15:24:53 compute-0 sudo[151719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:53 compute-0 python3.9[151721]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713092.1376781-770-9257239738769/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:53 compute-0 sudo[151719]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:24:53.659 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:24:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:24:53.661 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:24:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:24:53.661 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:24:53 compute-0 sudo[151871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmgvnvprituyplntmigqcmohojdtaqxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713093.405382-770-170254813804626/AnsiballZ_stat.py'
Jan 06 15:24:53 compute-0 sudo[151871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:54 compute-0 python3.9[151873]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:54 compute-0 sudo[151871]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:54 compute-0 sudo[151994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjsnslkntqibcgtbrshfavmcthcbmfju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713093.405382-770-170254813804626/AnsiballZ_copy.py'
Jan 06 15:24:54 compute-0 sudo[151994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:54 compute-0 python3.9[151996]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713093.405382-770-170254813804626/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:54 compute-0 sudo[151994]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:55 compute-0 sudo[152146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ythldzhkusocbtrerayrkqaxvxuelayb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713094.8887076-770-18152088824932/AnsiballZ_stat.py'
Jan 06 15:24:55 compute-0 sudo[152146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:55 compute-0 python3.9[152148]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:55 compute-0 sudo[152146]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:55 compute-0 sudo[152269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvpnqldhprdiekyowptzdgbprlrfwtmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713094.8887076-770-18152088824932/AnsiballZ_copy.py'
Jan 06 15:24:55 compute-0 sudo[152269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:56 compute-0 python3.9[152271]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713094.8887076-770-18152088824932/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:56 compute-0 sudo[152269]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:56 compute-0 sudo[152437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckjjzwveviygabzmbynzubauhamtvgyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713096.3677964-770-123992277941463/AnsiballZ_stat.py'
Jan 06 15:24:56 compute-0 sudo[152437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:56 compute-0 podman[152395]: 2026-01-06 15:24:56.721516739 +0000 UTC m=+0.095003015 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 06 15:24:56 compute-0 python3.9[152441]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:56 compute-0 sudo[152437]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:57 compute-0 sudo[152562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhmrpnzjbjyfjqloszcywxnmtbipyhsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713096.3677964-770-123992277941463/AnsiballZ_copy.py'
Jan 06 15:24:57 compute-0 sudo[152562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:57 compute-0 python3.9[152564]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713096.3677964-770-123992277941463/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:57 compute-0 sudo[152562]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:57 compute-0 sudo[152714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebwuhfbfhfsredezezowbvjnuxsrkvag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713097.6966877-770-229048033322144/AnsiballZ_stat.py'
Jan 06 15:24:57 compute-0 sudo[152714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:58 compute-0 python3.9[152716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:58 compute-0 sudo[152714]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:58 compute-0 sudo[152837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqhdliwcfmknjtkrwnkkaazfbsaxbftm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713097.6966877-770-229048033322144/AnsiballZ_copy.py'
Jan 06 15:24:58 compute-0 sudo[152837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:58 compute-0 python3.9[152839]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713097.6966877-770-229048033322144/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:24:58 compute-0 sudo[152837]: pam_unix(sudo:session): session closed for user root
Jan 06 15:24:59 compute-0 sudo[152989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbmshoywwrccxoiewbqlnigxpntloyzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713098.9990854-770-214151790904656/AnsiballZ_stat.py'
Jan 06 15:24:59 compute-0 sudo[152989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:24:59 compute-0 python3.9[152991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:24:59 compute-0 sudo[152989]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:00 compute-0 sudo[153112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzkkadahejrqzegrbufyvlcmafdaexnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713098.9990854-770-214151790904656/AnsiballZ_copy.py'
Jan 06 15:25:00 compute-0 sudo[153112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:00 compute-0 python3.9[153114]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713098.9990854-770-214151790904656/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:00 compute-0 sudo[153112]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:00 compute-0 sudo[153264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nknczowvtwwldzbwlywcnllslsoqzgyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713100.4271703-770-206700598773696/AnsiballZ_stat.py'
Jan 06 15:25:00 compute-0 sudo[153264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:00 compute-0 python3.9[153266]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:00 compute-0 sudo[153264]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:01 compute-0 sudo[153387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtvjyaexmfxqddcatstqladtwpiydklr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713100.4271703-770-206700598773696/AnsiballZ_copy.py'
Jan 06 15:25:01 compute-0 sudo[153387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:01 compute-0 python3.9[153389]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713100.4271703-770-206700598773696/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:01 compute-0 sudo[153387]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:02 compute-0 sudo[153539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpwxhkvstoluvznmttifaumbiljawkbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713101.7422466-770-207104426803102/AnsiballZ_stat.py'
Jan 06 15:25:02 compute-0 sudo[153539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:02 compute-0 python3.9[153541]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:02 compute-0 sudo[153539]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:02 compute-0 sudo[153662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxkctbhclrhzugeelechuivabbtvgcau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713101.7422466-770-207104426803102/AnsiballZ_copy.py'
Jan 06 15:25:02 compute-0 sudo[153662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:02 compute-0 python3.9[153664]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713101.7422466-770-207104426803102/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:02 compute-0 sudo[153662]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:03 compute-0 sudo[153814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwivjstcpqvyvvrmtmyldomirtuivynu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713103.0658648-770-133122296054208/AnsiballZ_stat.py'
Jan 06 15:25:03 compute-0 sudo[153814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:03 compute-0 python3.9[153816]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:03 compute-0 sudo[153814]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:03 compute-0 sudo[153937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymjhgjevcasmrnakclajfkbfcdurdifs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713103.0658648-770-133122296054208/AnsiballZ_copy.py'
Jan 06 15:25:03 compute-0 sudo[153937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:04 compute-0 python3.9[153939]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713103.0658648-770-133122296054208/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:04 compute-0 sudo[153937]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:04 compute-0 sudo[154089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubhodkjiaqcoluxnrqqzdmdgsyrikxir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713104.3454008-770-233611925754336/AnsiballZ_stat.py'
Jan 06 15:25:04 compute-0 sudo[154089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:04 compute-0 python3.9[154091]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:04 compute-0 sudo[154089]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:05 compute-0 sudo[154212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvnhgserbtulitnlpkvxztfujamqsxoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713104.3454008-770-233611925754336/AnsiballZ_copy.py'
Jan 06 15:25:05 compute-0 sudo[154212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:05 compute-0 python3.9[154214]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713104.3454008-770-233611925754336/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:05 compute-0 sudo[154212]: pam_unix(sudo:session): session closed for user root
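Note: each drop-in directory created earlier now receives an override.conf rendered from libvirt-socket.unit.j2; every copy reports the same checksum, so a single template is being reused for all sockets. The log records only the checksum, not the rendered content, so the snippet below is purely illustrative of what such a socket drop-in can look like, followed by the reload systemd needs before the overrides take effect:

    # hypothetical drop-in content; the real override.conf rendered from
    # libvirt-socket.unit.j2 is not visible in this log
    cat > /etc/systemd/system/virtqemud.socket.d/override.conf <<'EOF'
    [Socket]
    SocketMode=0660
    SocketGroup=libvirt
    EOF
    systemctl daemon-reload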
Jan 06 15:25:06 compute-0 python3.9[154364]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:25:07 compute-0 sudo[154517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjltfckxavgpqjrlamlunojhkyxjjnga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713106.4898217-976-44658755322502/AnsiballZ_seboolean.py'
Jan 06 15:25:07 compute-0 sudo[154517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:07 compute-0 python3.9[154519]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 06 15:25:08 compute-0 sudo[154517]: pam_unix(sudo:session): session closed for user root
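Note: the seboolean task above persistently enables the os_enable_vtpm SELinux boolean (typically required for emulated TPM/swtpm support under libvirt); the avc op=load_policy notice logged shortly afterwards is consistent with the policy reload that a persistent boolean change triggers. The direct command-line equivalent is:

    # persistently enable the boolean and confirm its state
    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm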
Jan 06 15:25:09 compute-0 sudo[154673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rinxzrszuukeyonimrqgjbdkirfkjtry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713109.06562-984-104707652817530/AnsiballZ_copy.py'
Jan 06 15:25:09 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 06 15:25:09 compute-0 sudo[154673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:09 compute-0 python3.9[154675]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:09 compute-0 sudo[154673]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:10 compute-0 sudo[154825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beanmwhkdxxyrnrioavwuwcozqnpwtvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713109.7905269-984-29969592764361/AnsiballZ_copy.py'
Jan 06 15:25:10 compute-0 sudo[154825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:10 compute-0 python3.9[154827]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:10 compute-0 sudo[154825]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:10 compute-0 sudo[154977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukoopnrrqgwllnwbmmldxrsluqnsarpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713110.564388-984-68068680792252/AnsiballZ_copy.py'
Jan 06 15:25:10 compute-0 sudo[154977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:11 compute-0 python3.9[154979]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:11 compute-0 sudo[154977]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:11 compute-0 sudo[155129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbgrczsulhhyjbifqdptmxcolbmfafpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713111.3153982-984-74946483536499/AnsiballZ_copy.py'
Jan 06 15:25:11 compute-0 sudo[155129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:11 compute-0 python3.9[155131]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:11 compute-0 sudo[155129]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:12 compute-0 sudo[155281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfsmavqxsfahhlhjnynpljutpczbdsyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713112.1915705-984-125654142977110/AnsiballZ_copy.py'
Jan 06 15:25:12 compute-0 sudo[155281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:12 compute-0 python3.9[155283]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:12 compute-0 sudo[155281]: pam_unix(sudo:session): session closed for user root
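Note: the remote_src=True copy tasks above fan the single libvirt TLS certificate, key and CA out to the locations the libvirt daemons expect for server-side and client-side TLS. Based only on the logged source paths, destinations and modes, the equivalent would be:

    src=/var/lib/openstack/certs/libvirt/default
    install -o root -g root -m 0644 "$src/tls.crt" /etc/pki/libvirt/servercert.pem
    install -o root -g root -m 0600 "$src/tls.key" /etc/pki/libvirt/private/serverkey.pem
    install -o root -g root -m 0644 "$src/tls.crt" /etc/pki/libvirt/clientcert.pem
    install -o root -g root -m 0644 "$src/tls.key" /etc/pki/libvirt/private/clientkey.pem
    install -o root -g root -m 0644 "$src/ca.crt"  /etc/pki/CA/cacert.pem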
Jan 06 15:25:13 compute-0 sudo[155433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiiwpuuidpyifeawcxpacoqufihpdepc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713113.025466-1020-270418525111906/AnsiballZ_copy.py'
Jan 06 15:25:13 compute-0 sudo[155433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:13 compute-0 python3.9[155435]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:13 compute-0 sudo[155433]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:14 compute-0 sudo[155585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihdvgnfccydodmkwapbtqgllbqpwirgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713113.7182229-1020-104222665867029/AnsiballZ_copy.py'
Jan 06 15:25:14 compute-0 sudo[155585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:14 compute-0 python3.9[155587]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:14 compute-0 sudo[155585]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:14 compute-0 sudo[155737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdifyhptwtjextnimdurtqqgvujgbfaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713114.4173946-1020-53993418364135/AnsiballZ_copy.py'
Jan 06 15:25:14 compute-0 sudo[155737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:14 compute-0 python3.9[155739]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:14 compute-0 sudo[155737]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:15 compute-0 sudo[155889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbibfffqtawrlohxikbybuyoyoymgffz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713115.15857-1020-190832627989234/AnsiballZ_copy.py'
Jan 06 15:25:15 compute-0 sudo[155889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:15 compute-0 python3.9[155891]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:15 compute-0 sudo[155889]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:16 compute-0 sudo[156048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btnthgusnguaxpumqjbsqrmqsifvzdhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713115.9976387-1020-71077925306515/AnsiballZ_copy.py'
Jan 06 15:25:16 compute-0 sudo[156048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:16 compute-0 podman[156015]: 2026-01-06 15:25:16.420879759 +0000 UTC m=+0.140326206 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:25:16 compute-0 python3.9[156056]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:16 compute-0 sudo[156048]: pam_unix(sudo:session): session closed for user root
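For reference (not part of the captured journal): the six copy tasks above distribute libvirt's QEMU TLS material from /var/lib/openstack/certs/libvirt/default into /etc/pki/CA and /etc/pki/qemu. A minimal sketch of how the result could be spot-checked from a shell on compute-0, assuming the copied files form a standard X.509 chain signed by the deployed CA (these commands are illustrative and were not run as part of this job):

    # confirm the server certificate chains to the CA copied to /etc/pki/CA/cacert.pem
    openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/qemu/server-cert.pem
    # confirm ownership and mode match the copy tasks above (root:qemu, 0640)
    ls -l /etc/pki/qemu/server-cert.pem /etc/pki/qemu/server-key.pem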
Jan 06 15:25:17 compute-0 sudo[156219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzgdpzbrljrlwyiygwthrmqetlwodanq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713116.7564037-1056-208343059299513/AnsiballZ_systemd.py'
Jan 06 15:25:17 compute-0 sudo[156219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:17 compute-0 python3.9[156221]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:25:17 compute-0 systemd[1]: Reloading.
Jan 06 15:25:17 compute-0 systemd-rc-local-generator[156247]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:25:17 compute-0 systemd-sysv-generator[156251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:25:17 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 06 15:25:17 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 06 15:25:17 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 06 15:25:17 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 06 15:25:17 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 06 15:25:17 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 06 15:25:17 compute-0 sudo[156219]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:18 compute-0 sudo[156412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjayifezcvoyopwffegiuuyhzumpkknc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713118.039797-1056-200922387582951/AnsiballZ_systemd.py'
Jan 06 15:25:18 compute-0 sudo[156412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:18 compute-0 python3.9[156414]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:25:18 compute-0 systemd[1]: Reloading.
Jan 06 15:25:18 compute-0 systemd-sysv-generator[156444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:25:18 compute-0 systemd-rc-local-generator[156441]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:25:18 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 06 15:25:19 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 06 15:25:19 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 06 15:25:19 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 06 15:25:19 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 06 15:25:19 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 06 15:25:19 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 06 15:25:19 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 06 15:25:19 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 06 15:25:19 compute-0 sudo[156412]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:19 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 06 15:25:19 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 06 15:25:19 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 06 15:25:19 compute-0 sudo[156633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afbhmeqphxpsdvcddmxooymnlsyzvlmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713119.4823718-1056-224986911749045/AnsiballZ_systemd.py'
Jan 06 15:25:19 compute-0 sudo[156633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:20 compute-0 python3.9[156638]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:25:20 compute-0 systemd[1]: Reloading.
Jan 06 15:25:20 compute-0 systemd-rc-local-generator[156662]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:25:20 compute-0 systemd-sysv-generator[156668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:25:20 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 06 15:25:20 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 06 15:25:20 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 06 15:25:20 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 06 15:25:20 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 06 15:25:20 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 06 15:25:20 compute-0 sudo[156633]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:20 compute-0 setroubleshoot[156476]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 089b3e07-b00d-4935-b983-599c0deea31b
Jan 06 15:25:20 compute-0 setroubleshoot[156476]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 06 15:25:20 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 06 15:25:20 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 06 15:25:21 compute-0 sudo[156850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pinciasjubanfxigogqxfnsvzwjwjveh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713120.709294-1056-275804259877443/AnsiballZ_systemd.py'
Jan 06 15:25:21 compute-0 sudo[156850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:21 compute-0 python3.9[156852]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:25:21 compute-0 systemd[1]: Reloading.
Jan 06 15:25:21 compute-0 systemd-sysv-generator[156883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:25:21 compute-0 systemd-rc-local-generator[156879]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:25:21 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 06 15:25:21 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 06 15:25:21 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 06 15:25:21 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 06 15:25:21 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 06 15:25:21 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 06 15:25:21 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 06 15:25:21 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 06 15:25:21 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 06 15:25:21 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 06 15:25:21 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 06 15:25:21 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 06 15:25:21 compute-0 sudo[156850]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:22 compute-0 sudo[157065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vppqzjdevugulugedvnmlxctmoqatzcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713122.0063875-1056-211049809897312/AnsiballZ_systemd.py'
Jan 06 15:25:22 compute-0 sudo[157065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:22 compute-0 python3.9[157067]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:25:22 compute-0 systemd[1]: Reloading.
Jan 06 15:25:22 compute-0 systemd-rc-local-generator[157094]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:25:22 compute-0 systemd-sysv-generator[157099]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:25:22 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 06 15:25:22 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 06 15:25:22 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 06 15:25:22 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 06 15:25:22 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 06 15:25:22 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 06 15:25:22 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 06 15:25:23 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 06 15:25:23 compute-0 sudo[157065]: pam_unix(sudo:session): session closed for user root
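For reference (not part of the captured journal): the five systemd tasks above restart libvirt's modular daemons in turn (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd). A quick follow-up check, using only the unit names that appear in this log, might be:

    # prints one state per unit; exits non-zero if any unit is not active
    systemctl is-active virtlogd.service virtnodedevd.service virtproxyd.service virtqemud.service virtsecretd.service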
Jan 06 15:25:23 compute-0 sudo[157277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwnwpeqzyqpufmaimthzqtyfhbmqrbog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713123.3783095-1093-154321729248872/AnsiballZ_file.py'
Jan 06 15:25:23 compute-0 sudo[157277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:23 compute-0 python3.9[157279]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:23 compute-0 sudo[157277]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:24 compute-0 sudo[157429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juflwdjdpvlbferqahuxzrevdtipkjty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713124.1578584-1101-244511357898759/AnsiballZ_find.py'
Jan 06 15:25:24 compute-0 sudo[157429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:24 compute-0 python3.9[157431]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 06 15:25:24 compute-0 sudo[157429]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:25 compute-0 sudo[157581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvlooimxuihagockiqmaisptqqzadxql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713125.138598-1115-40266678142323/AnsiballZ_stat.py'
Jan 06 15:25:25 compute-0 sudo[157581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:25 compute-0 python3.9[157583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:25 compute-0 sudo[157581]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:26 compute-0 sudo[157704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rauyusuifjgmbmwdrsvlmuwaltlghqbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713125.138598-1115-40266678142323/AnsiballZ_copy.py'
Jan 06 15:25:26 compute-0 sudo[157704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:26 compute-0 python3.9[157706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713125.138598-1115-40266678142323/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:26 compute-0 sudo[157704]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:27 compute-0 sudo[157867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhkbzpccvgvpgngffherdgjbxcjeywsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713126.790067-1131-226991025071052/AnsiballZ_file.py'
Jan 06 15:25:27 compute-0 sudo[157867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:27 compute-0 podman[157830]: 2026-01-06 15:25:27.192915547 +0000 UTC m=+0.088100230 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:25:27 compute-0 python3.9[157874]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:27 compute-0 sudo[157867]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:27 compute-0 sudo[158027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyvlvialpflcznkklhpiigjjtdhjevva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713127.5746994-1139-65644820755125/AnsiballZ_stat.py'
Jan 06 15:25:27 compute-0 sudo[158027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:28 compute-0 python3.9[158029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:28 compute-0 sudo[158027]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:28 compute-0 sudo[158105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvdxlrxopevdqifridlhizxuwqdwfncr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713127.5746994-1139-65644820755125/AnsiballZ_file.py'
Jan 06 15:25:28 compute-0 sudo[158105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:28 compute-0 python3.9[158107]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:28 compute-0 sudo[158105]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:29 compute-0 sudo[158257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbdjnsexyjjqwgnfnjetlhyjtlhttkfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713128.9686956-1151-50413290659868/AnsiballZ_stat.py'
Jan 06 15:25:29 compute-0 sudo[158257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:29 compute-0 python3.9[158259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:29 compute-0 sudo[158257]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:29 compute-0 sudo[158335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgcodprlnwpiksfclaoinaiqqvatvdgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713128.9686956-1151-50413290659868/AnsiballZ_file.py'
Jan 06 15:25:29 compute-0 sudo[158335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:30 compute-0 python3.9[158337]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.f8wkyag4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:30 compute-0 sudo[158335]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:30 compute-0 sudo[158487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntsshslljzebfmaghfxynvjkogvrtcif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713130.3820746-1163-129522778336054/AnsiballZ_stat.py'
Jan 06 15:25:30 compute-0 sudo[158487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:30 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 06 15:25:30 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 06 15:25:30 compute-0 python3.9[158489]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:31 compute-0 sudo[158487]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:31 compute-0 sudo[158565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anlnouoqtemktfypjkrnjtmpclwbbmjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713130.3820746-1163-129522778336054/AnsiballZ_file.py'
Jan 06 15:25:31 compute-0 sudo[158565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:31 compute-0 python3.9[158567]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:31 compute-0 sudo[158565]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:32 compute-0 sudo[158717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpgjsonoyurxocylojwahxhxzvwbogcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713131.7373557-1176-173965962313698/AnsiballZ_command.py'
Jan 06 15:25:32 compute-0 sudo[158717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:32 compute-0 python3.9[158719]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:25:32 compute-0 sudo[158717]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:32 compute-0 sudo[158870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rknsxbetyumswknqadpvrfdrncazewiy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713132.4730177-1184-4643570950665/AnsiballZ_edpm_nftables_from_files.py'
Jan 06 15:25:32 compute-0 sudo[158870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:33 compute-0 python3[158872]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 06 15:25:33 compute-0 sudo[158870]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:33 compute-0 sudo[159022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxxszaotpwlozwrasauwikzpspccsvba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713133.4117675-1192-74458728032100/AnsiballZ_stat.py'
Jan 06 15:25:33 compute-0 sudo[159022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:34 compute-0 python3.9[159024]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:34 compute-0 sudo[159022]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:34 compute-0 sudo[159100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhdlyiynkyqmuuegnxqnhbkqkcxqtmew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713133.4117675-1192-74458728032100/AnsiballZ_file.py'
Jan 06 15:25:34 compute-0 sudo[159100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:34 compute-0 python3.9[159102]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:34 compute-0 sudo[159100]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:35 compute-0 sudo[159252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufbeccihgoyllwmwsxwtxccrllowqghw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713134.7693586-1204-133746433966933/AnsiballZ_stat.py'
Jan 06 15:25:35 compute-0 sudo[159252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:35 compute-0 python3.9[159254]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:35 compute-0 sudo[159252]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:35 compute-0 sudo[159330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpvfnvhggnwkbdtwtonmcjojjzwymxia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713134.7693586-1204-133746433966933/AnsiballZ_file.py'
Jan 06 15:25:35 compute-0 sudo[159330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:35 compute-0 python3.9[159332]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:35 compute-0 sudo[159330]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:36 compute-0 sudo[159482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmyetojymsosewpixxeirphngdajyxlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713135.970144-1216-67078687590789/AnsiballZ_stat.py'
Jan 06 15:25:36 compute-0 sudo[159482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:36 compute-0 python3.9[159484]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:36 compute-0 sudo[159482]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:36 compute-0 sudo[159560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgiwdqtapbghifeyziesexvkgvjvvfgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713135.970144-1216-67078687590789/AnsiballZ_file.py'
Jan 06 15:25:36 compute-0 sudo[159560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:37 compute-0 python3.9[159562]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:37 compute-0 sudo[159560]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:37 compute-0 sudo[159712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtlbfnvdpehjcajxvhtrrmirxikpjwoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713137.2100618-1228-39851116223355/AnsiballZ_stat.py'
Jan 06 15:25:37 compute-0 sudo[159712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:37 compute-0 python3.9[159714]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:37 compute-0 sudo[159712]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:37 compute-0 sudo[159790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djlopnuztrwroxgpltcddftfmpyscsem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713137.2100618-1228-39851116223355/AnsiballZ_file.py'
Jan 06 15:25:37 compute-0 sudo[159790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:38 compute-0 python3.9[159792]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:38 compute-0 sudo[159790]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:38 compute-0 sudo[159942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxquvhztkjxwoailcszrinqbkmaaxuxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713138.3562505-1240-183221437720325/AnsiballZ_stat.py'
Jan 06 15:25:38 compute-0 sudo[159942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:39 compute-0 python3.9[159944]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:39 compute-0 sudo[159942]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:39 compute-0 sudo[160067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxtnatjpaxkzlqkosfovetaqdrvhwmkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713138.3562505-1240-183221437720325/AnsiballZ_copy.py'
Jan 06 15:25:39 compute-0 sudo[160067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:39 compute-0 python3.9[160069]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713138.3562505-1240-183221437720325/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:39 compute-0 sudo[160067]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:40 compute-0 sudo[160219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swquvpusnammrltxkccinjvjltjrozmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713139.8764083-1255-166036197080519/AnsiballZ_file.py'
Jan 06 15:25:40 compute-0 sudo[160219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:40 compute-0 python3.9[160221]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:40 compute-0 sudo[160219]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:40 compute-0 sudo[160371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-susngnvjefsacekrmbiqelcziguzfpja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713140.5858393-1263-153001805869134/AnsiballZ_command.py'
Jan 06 15:25:40 compute-0 sudo[160371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:41 compute-0 python3.9[160373]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:25:41 compute-0 sudo[160371]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:41 compute-0 sudo[160526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iypwjjufihzpuqoipnqgfctqqfabpvck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713141.303298-1271-123780866393676/AnsiballZ_blockinfile.py'
Jan 06 15:25:41 compute-0 sudo[160526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:41 compute-0 python3.9[160528]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:41 compute-0 sudo[160526]: pam_unix(sudo:session): session closed for user root
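For reference (not part of the captured journal): given the block and marker parameters logged above (marker "# {mark} ANSIBLE MANAGED BLOCK", marker_begin=BEGIN, marker_end=END, validated with "nft -c -f %s"), the managed block written into /etc/sysconfig/nftables.conf should look roughly like this; it is reconstructed from the task arguments, not read back from the host:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK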
Jan 06 15:25:42 compute-0 sudo[160678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvzcutqwcapgfsdrwdlapeuarevinbar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713142.226635-1280-146836310811752/AnsiballZ_command.py'
Jan 06 15:25:42 compute-0 sudo[160678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:42 compute-0 python3.9[160680]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:25:42 compute-0 sudo[160678]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:43 compute-0 sudo[160831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvsqlubbmcganpgnizckealybydddkhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713143.2240977-1288-142574190209512/AnsiballZ_stat.py'
Jan 06 15:25:43 compute-0 sudo[160831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:43 compute-0 python3.9[160833]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:25:43 compute-0 sudo[160831]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:44 compute-0 sudo[160985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnvxzpegieitxcnhagxtugcysdhbnlom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713143.9419954-1296-237247099760033/AnsiballZ_command.py'
Jan 06 15:25:44 compute-0 sudo[160985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:44 compute-0 python3.9[160987]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:25:44 compute-0 sudo[160985]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:44 compute-0 sudo[161140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzjycbopexbnbsklajoudchxynizbcjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713144.659816-1304-145521360485393/AnsiballZ_file.py'
Jan 06 15:25:44 compute-0 sudo[161140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:45 compute-0 python3.9[161142]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:45 compute-0 sudo[161140]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:45 compute-0 sudo[161292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpiaankwfnghfnycwkvkhqznypyocitg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713145.4204304-1312-277449791365/AnsiballZ_stat.py'
Jan 06 15:25:45 compute-0 sudo[161292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:46 compute-0 python3.9[161294]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:46 compute-0 sudo[161292]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:46 compute-0 sudo[161415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsbhewntqibnyiabkcufvcgviwzuzrog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713145.4204304-1312-277449791365/AnsiballZ_copy.py'
Jan 06 15:25:46 compute-0 sudo[161415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:46 compute-0 podman[161417]: 2026-01-06 15:25:46.646932408 +0000 UTC m=+0.162991408 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 06 15:25:46 compute-0 python3.9[161418]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713145.4204304-1312-277449791365/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:46 compute-0 sudo[161415]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:47 compute-0 sudo[161594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llpemsfytmfljilqkkkingssnpeymgtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713146.8826327-1327-94281088489295/AnsiballZ_stat.py'
Jan 06 15:25:47 compute-0 sudo[161594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:47 compute-0 python3.9[161596]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:47 compute-0 sudo[161594]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:47 compute-0 sudo[161717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txxqraphurcqmjmgyzhgsbwhpkovfxdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713146.8826327-1327-94281088489295/AnsiballZ_copy.py'
Jan 06 15:25:47 compute-0 sudo[161717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:47 compute-0 python3.9[161719]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713146.8826327-1327-94281088489295/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:47 compute-0 sudo[161717]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:48 compute-0 sudo[161869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kawxhhqqtvjvejprhcgumzlaavfbzupz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713148.1650198-1342-124678990489475/AnsiballZ_stat.py'
Jan 06 15:25:48 compute-0 sudo[161869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:48 compute-0 python3.9[161871]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:25:48 compute-0 sudo[161869]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:49 compute-0 sudo[161992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhxnogfvbadelcqedirwpgibbbylgcws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713148.1650198-1342-124678990489475/AnsiballZ_copy.py'
Jan 06 15:25:49 compute-0 sudo[161992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:49 compute-0 python3.9[161994]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713148.1650198-1342-124678990489475/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:25:49 compute-0 sudo[161992]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:49 compute-0 sudo[162144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hecwubwlgeqcoojjuunvxyrmsszqczhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713149.5142303-1357-98339354835194/AnsiballZ_systemd.py'
Jan 06 15:25:49 compute-0 sudo[162144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:50 compute-0 python3.9[162146]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:25:50 compute-0 systemd[1]: Reloading.
Jan 06 15:25:50 compute-0 systemd-rc-local-generator[162174]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:25:50 compute-0 systemd-sysv-generator[162177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:25:50 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 06 15:25:50 compute-0 sudo[162144]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:51 compute-0 sudo[162336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqeiyipkxqvaneulfazqqzswygkwdubr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713150.7461216-1365-78819120438719/AnsiballZ_systemd.py'
Jan 06 15:25:51 compute-0 sudo[162336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:25:51 compute-0 python3.9[162338]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 06 15:25:51 compute-0 systemd[1]: Reloading.
Jan 06 15:25:51 compute-0 systemd-rc-local-generator[162364]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:25:51 compute-0 systemd-sysv-generator[162369]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:25:51 compute-0 systemd[1]: Reloading.
Jan 06 15:25:51 compute-0 systemd-sysv-generator[162404]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:25:51 compute-0 systemd-rc-local-generator[162401]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:25:51 compute-0 sudo[162336]: pam_unix(sudo:session): session closed for user root
Jan 06 15:25:52 compute-0 sshd-session[107843]: Connection closed by 192.168.122.30 port 50164
Jan 06 15:25:52 compute-0 sshd-session[107840]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:25:52 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 06 15:25:52 compute-0 systemd[1]: session-23.scope: Consumed 3min 55.963s CPU time.
Jan 06 15:25:52 compute-0 systemd-logind[791]: Session 23 logged out. Waiting for processes to exit.
Jan 06 15:25:52 compute-0 systemd-logind[791]: Removed session 23.
Jan 06 15:25:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:25:53.661 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:25:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:25:53.664 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:25:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:25:53.664 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:25:57 compute-0 podman[162435]: 2026-01-06 15:25:57.787840087 +0000 UTC m=+0.053768271 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 06 15:25:58 compute-0 sshd-session[162456]: Accepted publickey for zuul from 192.168.122.30 port 40688 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:25:58 compute-0 systemd-logind[791]: New session 24 of user zuul.
Jan 06 15:25:58 compute-0 systemd[1]: Started Session 24 of User zuul.
Jan 06 15:25:58 compute-0 sshd-session[162456]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:25:59 compute-0 python3.9[162609]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:26:00 compute-0 python3.9[162763]: ansible-ansible.builtin.service_facts Invoked
Jan 06 15:26:00 compute-0 network[162780]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 06 15:26:00 compute-0 network[162781]: 'network-scripts' will be removed from distribution in near future.
Jan 06 15:26:00 compute-0 network[162782]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 06 15:26:06 compute-0 sudo[163051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voyzltzkumeqectfkozbwfrdxvptkszk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713166.0358534-42-113174938959590/AnsiballZ_setup.py'
Jan 06 15:26:06 compute-0 sudo[163051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:06 compute-0 python3.9[163053]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:26:06 compute-0 sudo[163051]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:07 compute-0 sudo[163135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkqkzipvwgvfbyyjhajmisqvtgehlfix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713166.0358534-42-113174938959590/AnsiballZ_dnf.py'
Jan 06 15:26:07 compute-0 sudo[163135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:07 compute-0 python3.9[163137]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:26:12 compute-0 sudo[163135]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:13 compute-0 sudo[163288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhwazxrqxhvtimspvldqowuasjduoyvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713173.181479-54-129208677771314/AnsiballZ_stat.py'
Jan 06 15:26:13 compute-0 sudo[163288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:14 compute-0 python3.9[163290]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:26:14 compute-0 sudo[163288]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:14 compute-0 sudo[163440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyxafxqustzpxjjhxrqhnqwfmszqzqbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713174.3105662-64-202879926755091/AnsiballZ_command.py'
Jan 06 15:26:14 compute-0 sudo[163440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:14 compute-0 python3.9[163442]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:26:15 compute-0 sudo[163440]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:15 compute-0 sudo[163593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acudainztenutwsnynapglwmsxfnserv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713175.3224497-74-252351756730514/AnsiballZ_stat.py'
Jan 06 15:26:15 compute-0 sudo[163593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:15 compute-0 python3.9[163595]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:26:15 compute-0 sudo[163593]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:16 compute-0 sudo[163745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjfvnpjitvdsmxpfsjbuuxjysmkslaho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713176.042328-82-196672562235538/AnsiballZ_command.py'
Jan 06 15:26:16 compute-0 sudo[163745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:16 compute-0 python3.9[163747]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:26:16 compute-0 sudo[163745]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:16 compute-0 podman[163773]: 2026-01-06 15:26:16.906717146 +0000 UTC m=+0.162425029 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 06 15:26:17 compute-0 sudo[163922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lznsasolzphzmvpxvwscljpltcrmxypv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713176.8132896-90-125921675656165/AnsiballZ_stat.py'
Jan 06 15:26:17 compute-0 sudo[163922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:17 compute-0 python3.9[163924]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:26:17 compute-0 sudo[163922]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:17 compute-0 sudo[164045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtpbbdqcswxqhojifvjtvbtdlxthqntq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713176.8132896-90-125921675656165/AnsiballZ_copy.py'
Jan 06 15:26:17 compute-0 sudo[164045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:18 compute-0 python3.9[164047]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713176.8132896-90-125921675656165/.source.iscsi _original_basename=.2e4up79w follow=False checksum=8f9c4b38c36410d659ceb84f1fbffeb5c7a08804 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:18 compute-0 sudo[164045]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:19 compute-0 sudo[164197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmyoryspohzpblezemjolpqzclhjavxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713178.5689895-105-189075903554786/AnsiballZ_file.py'
Jan 06 15:26:19 compute-0 sudo[164197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:19 compute-0 python3.9[164199]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:19 compute-0 sudo[164197]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:20 compute-0 sudo[164349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wioauocqzjiqvdhxtokdrbmgkodwydac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713179.5384498-113-119820495697669/AnsiballZ_lineinfile.py'
Jan 06 15:26:20 compute-0 sudo[164349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:20 compute-0 python3.9[164351]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:20 compute-0 sudo[164349]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:21 compute-0 sudo[164501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hscmrjoxepbqfijhixgjyvrgtkpkobsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713180.5287929-122-154877608620010/AnsiballZ_systemd_service.py'
Jan 06 15:26:21 compute-0 sudo[164501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:21 compute-0 python3.9[164503]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:26:21 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 06 15:26:21 compute-0 sudo[164501]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:22 compute-0 sudo[164657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjqanjlfeggleqwdqwniticfapmipdys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713181.8115575-130-170091549865236/AnsiballZ_systemd_service.py'
Jan 06 15:26:22 compute-0 sudo[164657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:22 compute-0 python3.9[164659]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:26:22 compute-0 systemd[1]: Reloading.
Jan 06 15:26:22 compute-0 systemd-rc-local-generator[164687]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:26:22 compute-0 systemd-sysv-generator[164692]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:26:22 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 06 15:26:22 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 06 15:26:22 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 06 15:26:22 compute-0 systemd[1]: Started Open-iSCSI.
Jan 06 15:26:22 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 06 15:26:22 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 06 15:26:23 compute-0 sudo[164657]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:23 compute-0 python3.9[164857]: ansible-ansible.builtin.service_facts Invoked
Jan 06 15:26:24 compute-0 network[164874]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 06 15:26:24 compute-0 network[164875]: 'network-scripts' will be removed from distribution in near future.
Jan 06 15:26:24 compute-0 network[164876]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 06 15:26:28 compute-0 podman[165119]: 2026-01-06 15:26:28.464103325 +0000 UTC m=+0.069358386 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 06 15:26:28 compute-0 sudo[165163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dflrlypcxyemuydppjrfchgxrdoqqntj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713188.1204247-153-68944703261328/AnsiballZ_dnf.py'
Jan 06 15:26:28 compute-0 sudo[165163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:28 compute-0 python3.9[165167]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:26:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 06 15:26:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 06 15:26:31 compute-0 systemd[1]: Reloading.
Jan 06 15:26:31 compute-0 systemd-sysv-generator[165216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:26:31 compute-0 systemd-rc-local-generator[165213]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:26:31 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 06 15:26:31 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 06 15:26:31 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 06 15:26:31 compute-0 systemd[1]: run-r0dce026cbe084e9dac04cd7976ddbedc.service: Deactivated successfully.
Jan 06 15:26:32 compute-0 sudo[165163]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:32 compute-0 sudo[165481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvbcbsaqlhpadiodhgulwanvpwgrtfdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713192.5032434-162-8466206087464/AnsiballZ_file.py'
Jan 06 15:26:32 compute-0 sudo[165481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:32 compute-0 python3.9[165483]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 06 15:26:33 compute-0 sudo[165481]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:33 compute-0 sudo[165633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llogwvumeoajaznngaazlznuynlywzjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713193.1936295-170-13954885556800/AnsiballZ_modprobe.py'
Jan 06 15:26:33 compute-0 sudo[165633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:33 compute-0 python3.9[165635]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 06 15:26:33 compute-0 sudo[165633]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:34 compute-0 sudo[165789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vayulvvfmyydtmjdafncusgxtolryrre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713194.046604-178-73172323560615/AnsiballZ_stat.py'
Jan 06 15:26:34 compute-0 sudo[165789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:34 compute-0 python3.9[165791]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:26:34 compute-0 sudo[165789]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:34 compute-0 sudo[165912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sodwvzhqmuwandyqavekbjtiykthyter ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713194.046604-178-73172323560615/AnsiballZ_copy.py'
Jan 06 15:26:34 compute-0 sudo[165912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:35 compute-0 python3.9[165914]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713194.046604-178-73172323560615/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:35 compute-0 sudo[165912]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:35 compute-0 sudo[166064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqoltyckqygyikhjrxvsaaffquewzout ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713195.4100711-194-135488625998336/AnsiballZ_lineinfile.py'
Jan 06 15:26:35 compute-0 sudo[166064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:35 compute-0 python3.9[166066]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:35 compute-0 sudo[166064]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:37 compute-0 sudo[166216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwtjmdqspawtinhwmmsnvlqkhbdojscc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713196.1358776-202-15263558778403/AnsiballZ_systemd.py'
Jan 06 15:26:37 compute-0 sudo[166216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:37 compute-0 python3.9[166218]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:26:37 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 06 15:26:37 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 06 15:26:37 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 06 15:26:37 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 06 15:26:37 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 06 15:26:37 compute-0 sudo[166216]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:37 compute-0 sudo[166372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxonpirfqutrpmgxwkoqkimiulxjsjub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713197.6098938-210-34197444870848/AnsiballZ_command.py'
Jan 06 15:26:37 compute-0 sudo[166372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:38 compute-0 python3.9[166374]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:26:38 compute-0 sudo[166372]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:38 compute-0 sudo[166525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kufwjlqscywwvibsuktejjqaxyoykhby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713198.4276135-220-134804533520605/AnsiballZ_stat.py'
Jan 06 15:26:38 compute-0 sudo[166525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:38 compute-0 python3.9[166527]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:26:38 compute-0 sudo[166525]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:39 compute-0 sudo[166677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frcpfdxsvtxmyxvxthklzcnrgdcejbzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713199.1073687-229-281370261110165/AnsiballZ_stat.py'
Jan 06 15:26:39 compute-0 sudo[166677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:39 compute-0 python3.9[166679]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:26:39 compute-0 sudo[166677]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:40 compute-0 sudo[166800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxlxrohwzcpxiinhntglshwovrktgrzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713199.1073687-229-281370261110165/AnsiballZ_copy.py'
Jan 06 15:26:40 compute-0 sudo[166800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:40 compute-0 python3.9[166802]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713199.1073687-229-281370261110165/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:40 compute-0 sudo[166800]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:40 compute-0 sudo[166952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izgzjkoyojbidakgxgiibabykxdsejgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713200.3894804-244-33997025665241/AnsiballZ_command.py'
Jan 06 15:26:40 compute-0 sudo[166952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:40 compute-0 python3.9[166954]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:26:40 compute-0 sudo[166952]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:41 compute-0 sudo[167105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aggspkgdcrebrrnfefyitiftttnbgehi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713201.0977075-252-138177107952560/AnsiballZ_lineinfile.py'
Jan 06 15:26:41 compute-0 sudo[167105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:41 compute-0 python3.9[167107]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:41 compute-0 sudo[167105]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:42 compute-0 sudo[167257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrccgeeauuhldktmasczwjefaqinwjuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713201.9973712-260-127705457932658/AnsiballZ_replace.py'
Jan 06 15:26:42 compute-0 sudo[167257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:42 compute-0 python3.9[167259]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:42 compute-0 sudo[167257]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:43 compute-0 sudo[167409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drkhhwaeycmyrlilndnnexyatdzhtvqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713202.8920684-268-275142398632110/AnsiballZ_replace.py'
Jan 06 15:26:43 compute-0 sudo[167409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:43 compute-0 python3.9[167411]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:43 compute-0 sudo[167409]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:43 compute-0 sudo[167561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuflxpmebyfxowmerbklrlguztoodbzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713203.7098248-277-102474265921657/AnsiballZ_lineinfile.py'
Jan 06 15:26:43 compute-0 sudo[167561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:44 compute-0 python3.9[167563]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:44 compute-0 sudo[167561]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:44 compute-0 sudo[167713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iufjzpagavenbozgnabfhalynxanijfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713204.3089304-277-136685159158847/AnsiballZ_lineinfile.py'
Jan 06 15:26:44 compute-0 sudo[167713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:44 compute-0 python3.9[167715]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:44 compute-0 sudo[167713]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:45 compute-0 sudo[167865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubujoorapxnfbsvhzgsdiuaosnbfbwil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713205.0315616-277-96674204157447/AnsiballZ_lineinfile.py'
Jan 06 15:26:45 compute-0 sudo[167865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:45 compute-0 python3.9[167867]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:45 compute-0 sudo[167865]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:46 compute-0 sudo[168017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqwtmvbpwvsljygvzubipuljpztlvrod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713205.8288372-277-141647339486334/AnsiballZ_lineinfile.py'
Jan 06 15:26:46 compute-0 sudo[168017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:46 compute-0 python3.9[168019]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:46 compute-0 sudo[168017]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:46 compute-0 sudo[168169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngxqnrahzzopzlrtczxseerfckarqyjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713206.6324544-306-31957923426338/AnsiballZ_stat.py'
Jan 06 15:26:46 compute-0 sudo[168169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:47 compute-0 podman[168171]: 2026-01-06 15:26:47.131286175 +0000 UTC m=+0.113792721 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 06 15:26:47 compute-0 python3.9[168172]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:26:47 compute-0 sudo[168169]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:47 compute-0 sudo[168349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jobnymfgafaolboyhlatfuukgpdgykll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713207.4306667-314-109543228250578/AnsiballZ_command.py'
Jan 06 15:26:47 compute-0 sudo[168349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:47 compute-0 python3.9[168351]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:26:47 compute-0 sudo[168349]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:48 compute-0 sudo[168502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qofppsfdghdmjzhaymtihqpnrxjpumin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713208.203239-323-70969307755836/AnsiballZ_systemd_service.py'
Jan 06 15:26:48 compute-0 sudo[168502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:48 compute-0 python3.9[168504]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:26:48 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 06 15:26:48 compute-0 sudo[168502]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:49 compute-0 sudo[168658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efiqsuicterwlzqjsdtbdhlezryuidgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713209.1955833-331-109923476322813/AnsiballZ_systemd_service.py'
Jan 06 15:26:49 compute-0 sudo[168658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:49 compute-0 python3.9[168660]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:26:49 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 06 15:26:49 compute-0 udevadm[168665]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 06 15:26:49 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 06 15:26:49 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 06 15:26:50 compute-0 multipathd[168668]: --------start up--------
Jan 06 15:26:50 compute-0 multipathd[168668]: read /etc/multipath.conf
Jan 06 15:26:50 compute-0 multipathd[168668]: path checkers start up
Jan 06 15:26:50 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 06 15:26:50 compute-0 sudo[168658]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:50 compute-0 sudo[168825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcvkshovmlbhkclfkzdpcfzycfrzoqie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713210.446457-343-175816577646840/AnsiballZ_file.py'
Jan 06 15:26:50 compute-0 sudo[168825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:50 compute-0 python3.9[168827]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 06 15:26:50 compute-0 sudo[168825]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:51 compute-0 sudo[168977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avwutlfogkpwujnqarclsevdfrquenvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713211.1604598-351-50194564398772/AnsiballZ_modprobe.py'
Jan 06 15:26:51 compute-0 sudo[168977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:51 compute-0 python3.9[168979]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 06 15:26:51 compute-0 kernel: Key type psk registered
Jan 06 15:26:51 compute-0 sudo[168977]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:52 compute-0 sudo[169139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqeswvuooviaxsvacmlqzleldimcnewk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713211.8828924-359-124227528078164/AnsiballZ_stat.py'
Jan 06 15:26:52 compute-0 sudo[169139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:52 compute-0 python3.9[169141]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:26:52 compute-0 sudo[169139]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:52 compute-0 sudo[169262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttqjqwzbewfjeurubvpapviccivdfxik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713211.8828924-359-124227528078164/AnsiballZ_copy.py'
Jan 06 15:26:52 compute-0 sudo[169262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:52 compute-0 python3.9[169264]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713211.8828924-359-124227528078164/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:53 compute-0 sudo[169262]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:53 compute-0 sudo[169414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qilhpztkrboplaozwimhrvusfeehuwaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713213.249154-375-138217630343068/AnsiballZ_lineinfile.py'
Jan 06 15:26:53 compute-0 sudo[169414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:26:53.663 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:26:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:26:53.666 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:26:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:26:53.667 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:26:53 compute-0 python3.9[169416]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:26:53 compute-0 sudo[169414]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:54 compute-0 sudo[169566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpmkjuczfqvthengfhquybexrodyntic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713213.9965558-383-179580740786896/AnsiballZ_systemd.py'
Jan 06 15:26:54 compute-0 sudo[169566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:54 compute-0 python3.9[169568]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:26:54 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 06 15:26:54 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 06 15:26:54 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 06 15:26:54 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 06 15:26:54 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 06 15:26:54 compute-0 sudo[169566]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:55 compute-0 sudo[169722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzkuctpvekdudyvzvjwgxfvavxbjxjcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713214.9950256-391-229132358080166/AnsiballZ_dnf.py'
Jan 06 15:26:55 compute-0 sudo[169722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:26:55 compute-0 python3.9[169724]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:26:57 compute-0 systemd[1]: Reloading.
Jan 06 15:26:57 compute-0 systemd-rc-local-generator[169757]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:26:57 compute-0 systemd-sysv-generator[169760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:26:57 compute-0 systemd[1]: Reloading.
Jan 06 15:26:57 compute-0 systemd-sysv-generator[169795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:26:57 compute-0 systemd-rc-local-generator[169791]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:26:58 compute-0 systemd-logind[791]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 06 15:26:58 compute-0 systemd-logind[791]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 06 15:26:58 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 06 15:26:58 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 06 15:26:58 compute-0 podman[169855]: 2026-01-06 15:26:58.571520468 +0000 UTC m=+0.065406265 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:26:58 compute-0 systemd[1]: Reloading.
Jan 06 15:26:58 compute-0 systemd-rc-local-generator[169908]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:26:58 compute-0 systemd-sysv-generator[169912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:26:58 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 06 15:26:59 compute-0 sudo[169722]: pam_unix(sudo:session): session closed for user root
Jan 06 15:26:59 compute-0 sudo[171158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfzonxtncfmupqyfmqnzpualqlmybrod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713219.5928993-399-239235078754848/AnsiballZ_systemd_service.py'
Jan 06 15:26:59 compute-0 sudo[171158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:00 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 06 15:27:00 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 06 15:27:00 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.623s CPU time.
Jan 06 15:27:00 compute-0 systemd[1]: run-rcd9734b3de8b4dcc840c1613f9260036.service: Deactivated successfully.
Jan 06 15:27:00 compute-0 python3.9[171189]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:27:00 compute-0 iscsid[164699]: iscsid shutting down.
Jan 06 15:27:00 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 06 15:27:00 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 06 15:27:00 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 06 15:27:00 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 06 15:27:00 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 06 15:27:00 compute-0 systemd[1]: Started Open-iSCSI.
Jan 06 15:27:00 compute-0 sudo[171158]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:00 compute-0 sudo[171364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqokpfqeeiucpopzmtybqtadwfwgjfkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713220.466898-407-258406423380880/AnsiballZ_systemd_service.py'
Jan 06 15:27:00 compute-0 sudo[171364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:01 compute-0 python3.9[171366]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:27:01 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 06 15:27:01 compute-0 multipathd[168668]: exit (signal)
Jan 06 15:27:01 compute-0 multipathd[168668]: --------shut down-------
Jan 06 15:27:01 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 06 15:27:01 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 06 15:27:01 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 06 15:27:01 compute-0 multipathd[171373]: --------start up--------
Jan 06 15:27:01 compute-0 multipathd[171373]: read /etc/multipath.conf
Jan 06 15:27:01 compute-0 multipathd[171373]: path checkers start up
Jan 06 15:27:01 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 06 15:27:01 compute-0 sudo[171364]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:02 compute-0 python3.9[171530]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:27:02 compute-0 sudo[171684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrxmchhowqexgcvslmyliumbbsjcndio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713222.5910292-425-71302955694271/AnsiballZ_file.py'
Jan 06 15:27:02 compute-0 sudo[171684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:03 compute-0 python3.9[171686]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:03 compute-0 sudo[171684]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:04 compute-0 sudo[171836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klijmprispueaoqrsvbklxqayfzrgmlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713223.6990428-436-256248108790133/AnsiballZ_systemd_service.py'
Jan 06 15:27:04 compute-0 sudo[171836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:04 compute-0 python3.9[171838]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:27:04 compute-0 systemd[1]: Reloading.
Jan 06 15:27:04 compute-0 systemd-sysv-generator[171871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:27:04 compute-0 systemd-rc-local-generator[171867]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:27:04 compute-0 sudo[171836]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:05 compute-0 python3.9[172024]: ansible-ansible.builtin.service_facts Invoked
Jan 06 15:27:05 compute-0 network[172041]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 06 15:27:05 compute-0 network[172042]: 'network-scripts' will be removed from distribution in near future.
Jan 06 15:27:05 compute-0 network[172043]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 06 15:27:09 compute-0 sudo[172313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiqawbekcflddvzgpqscohnrejydkamf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713229.0128694-455-83260707434572/AnsiballZ_systemd_service.py'
Jan 06 15:27:09 compute-0 sudo[172313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:09 compute-0 sshd-session[172316]: banner exchange: Connection from 3.134.148.59 port 60368: invalid format
Jan 06 15:27:09 compute-0 python3.9[172315]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:27:09 compute-0 sudo[172313]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:10 compute-0 sudo[172467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpblarwhjizioqcrjhbpiejugfccmsxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713229.9126635-455-246254583448592/AnsiballZ_systemd_service.py'
Jan 06 15:27:10 compute-0 sudo[172467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:10 compute-0 python3.9[172469]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:27:10 compute-0 sudo[172467]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:11 compute-0 sudo[172620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcbcmfvzngbdjtxefwqwigycytpffxcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713230.7391107-455-170506035708011/AnsiballZ_systemd_service.py'
Jan 06 15:27:11 compute-0 sudo[172620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:11 compute-0 python3.9[172622]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:27:11 compute-0 sudo[172620]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:11 compute-0 sudo[172773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prklavuoqzzlbscafjfzstazdzjcqnsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713231.5372605-455-63225556184936/AnsiballZ_systemd_service.py'
Jan 06 15:27:11 compute-0 sudo[172773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:12 compute-0 python3.9[172775]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:27:12 compute-0 sudo[172773]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:12 compute-0 sudo[172926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdgxroeputymnqhpscjgcaarttawttou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713232.4034982-455-125855810658179/AnsiballZ_systemd_service.py'
Jan 06 15:27:12 compute-0 sudo[172926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:13 compute-0 python3.9[172928]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:27:13 compute-0 sudo[172926]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:13 compute-0 sudo[173079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxyebroxkmennsroctitfzxcuwdkpucf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713233.4389699-455-239769420185383/AnsiballZ_systemd_service.py'
Jan 06 15:27:13 compute-0 sudo[173079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:14 compute-0 python3.9[173081]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:27:14 compute-0 sudo[173079]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:14 compute-0 sudo[173232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnfgleoywwkxkvzjfdcqoqikyfshmqkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713234.239512-455-216696954633283/AnsiballZ_systemd_service.py'
Jan 06 15:27:14 compute-0 sudo[173232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:14 compute-0 python3.9[173234]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:27:14 compute-0 sudo[173232]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:15 compute-0 sudo[173385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqxnezvbxyvfocpcfdfmhvehtqgrqnqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713235.094231-455-28414163797469/AnsiballZ_systemd_service.py'
Jan 06 15:27:15 compute-0 sudo[173385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:15 compute-0 python3.9[173387]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:27:15 compute-0 sudo[173385]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:16 compute-0 sudo[173538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehucdqqcvmwugrlsnpgkozirdwjutfzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713236.01966-514-173422806929551/AnsiballZ_file.py'
Jan 06 15:27:16 compute-0 sudo[173538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:16 compute-0 python3.9[173540]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:16 compute-0 sudo[173538]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:17 compute-0 sudo[173690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mulvqcfnsvquayqyjsjbjolggmcplxdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713236.7147925-514-179715842321194/AnsiballZ_file.py'
Jan 06 15:27:17 compute-0 sudo[173690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:17 compute-0 python3.9[173692]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:17 compute-0 sudo[173690]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:17 compute-0 sudo[173848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isdlebydnhhwfveebjbwktaciilujgjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713237.410651-514-276826589562497/AnsiballZ_file.py'
Jan 06 15:27:17 compute-0 sudo[173848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:17 compute-0 podman[173815]: 2026-01-06 15:27:17.85560834 +0000 UTC m=+0.104006033 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible)
Jan 06 15:27:17 compute-0 python3.9[173855]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:17 compute-0 sudo[173848]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:18 compute-0 sudo[174020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzuiwsrxacqtjgasftqajvyarckfmyfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713238.139131-514-177638619290795/AnsiballZ_file.py'
Jan 06 15:27:18 compute-0 sudo[174020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:18 compute-0 python3.9[174022]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:18 compute-0 sudo[174020]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:19 compute-0 sudo[174172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spdbuskzoiysfqlbupajxvpwxvskmejb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713238.8052268-514-224099705602555/AnsiballZ_file.py'
Jan 06 15:27:19 compute-0 sudo[174172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:19 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 06 15:27:19 compute-0 python3.9[174174]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:19 compute-0 sudo[174172]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:19 compute-0 sudo[174325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fenqvrdbtidryyvamlnehqmbjlrfslsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713239.5595157-514-278626821920373/AnsiballZ_file.py'
Jan 06 15:27:19 compute-0 sudo[174325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:20 compute-0 python3.9[174327]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:20 compute-0 sudo[174325]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:20 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 06 15:27:20 compute-0 sudo[174478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgwezzvfdzfhcibcxqkpzpsvyyttbzpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713240.397309-514-135963182701951/AnsiballZ_file.py'
Jan 06 15:27:20 compute-0 sudo[174478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:20 compute-0 python3.9[174480]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:20 compute-0 sudo[174478]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:21 compute-0 sudo[174630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwungsnggaamknkryunpcurdmcqlgmmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713240.9957223-514-86352850421697/AnsiballZ_file.py'
Jan 06 15:27:21 compute-0 sudo[174630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:21 compute-0 python3.9[174632]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:21 compute-0 sudo[174630]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:21 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 06 15:27:22 compute-0 sudo[174783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rseurhxdopyibnyxuynktdaqihhnkhjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713241.7487586-571-183276178007345/AnsiballZ_file.py'
Jan 06 15:27:22 compute-0 sudo[174783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:22 compute-0 python3.9[174785]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:22 compute-0 sudo[174783]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:22 compute-0 sudo[174935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oetfmmnroljmwftembrzhsaxhexizrpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713242.4620693-571-205003105744177/AnsiballZ_file.py'
Jan 06 15:27:22 compute-0 sudo[174935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:22 compute-0 python3.9[174937]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:22 compute-0 sudo[174935]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:23 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 06 15:27:23 compute-0 sudo[175088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqocqdaicvfrmbamuthdxyoqqtopfypp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713243.135373-571-253723558350051/AnsiballZ_file.py'
Jan 06 15:27:23 compute-0 sudo[175088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:23 compute-0 python3.9[175090]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:23 compute-0 sudo[175088]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:24 compute-0 sudo[175240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngozxgxstapqsamgoopkjziepvryuwqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713243.7815213-571-125532402892971/AnsiballZ_file.py'
Jan 06 15:27:24 compute-0 sudo[175240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:24 compute-0 python3.9[175242]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:24 compute-0 sudo[175240]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:24 compute-0 sshd-session[175343]: banner exchange: Connection from 3.134.148.59 port 50874: invalid format
Jan 06 15:27:24 compute-0 sudo[175393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dubknzwqbwtbonjqnceaqhnlkqyidmdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713244.434961-571-95419645016135/AnsiballZ_file.py'
Jan 06 15:27:24 compute-0 sudo[175393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:24 compute-0 sshd-session[175396]: banner exchange: Connection from 3.134.148.59 port 50890: invalid format
Jan 06 15:27:25 compute-0 python3.9[175395]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:25 compute-0 sudo[175393]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:25 compute-0 sudo[175546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opdblroxmidtnxheqcjlvdocycwktwgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713245.200464-571-57877340070635/AnsiballZ_file.py'
Jan 06 15:27:25 compute-0 sudo[175546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:25 compute-0 python3.9[175548]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:25 compute-0 sudo[175546]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:26 compute-0 sudo[175698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqnpjuviwvvcvbytlpjmbtjyxzbnizjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713245.8962984-571-52958739540565/AnsiballZ_file.py'
Jan 06 15:27:26 compute-0 sudo[175698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:26 compute-0 python3.9[175700]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:26 compute-0 sudo[175698]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:26 compute-0 sudo[175850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-karvfozzkibghebtxvubsopuipguidra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713246.6099753-571-164100959350334/AnsiballZ_file.py'
Jan 06 15:27:26 compute-0 sudo[175850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:27 compute-0 python3.9[175852]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:27:27 compute-0 sudo[175850]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:27 compute-0 sudo[176002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reeccttbcxgkvjawzewanpfztwzbiiny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713247.4452066-629-259412719295332/AnsiballZ_command.py'
Jan 06 15:27:27 compute-0 sudo[176002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:27 compute-0 python3.9[176004]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:27:28 compute-0 sudo[176002]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:28 compute-0 podman[176157]: 2026-01-06 15:27:28.816838974 +0000 UTC m=+0.081600102 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:27:28 compute-0 python3.9[176156]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 06 15:27:29 compute-0 sudo[176325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-senbxqvmerloikhmwfeehtpcfusyqfko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713249.1163595-647-184832615227534/AnsiballZ_systemd_service.py'
Jan 06 15:27:29 compute-0 sudo[176325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:29 compute-0 python3.9[176327]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:27:29 compute-0 systemd[1]: Reloading.
Jan 06 15:27:29 compute-0 systemd-sysv-generator[176353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:27:29 compute-0 systemd-rc-local-generator[176346]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:27:30 compute-0 sudo[176325]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:30 compute-0 sudo[176512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rckocqxkubccbpjnyqdfzasgujeujxsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713250.2700186-655-146542633461951/AnsiballZ_command.py'
Jan 06 15:27:30 compute-0 sudo[176512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:30 compute-0 python3.9[176514]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:27:30 compute-0 sudo[176512]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:31 compute-0 sudo[176665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvatcjixtptcuqhzbveoffabxlszsgvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713250.9144661-655-225464694653518/AnsiballZ_command.py'
Jan 06 15:27:31 compute-0 sudo[176665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:31 compute-0 python3.9[176667]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:27:31 compute-0 sudo[176665]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:31 compute-0 sudo[176818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cquztxtuzlqedltysqojrqcoojkvfosq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713251.568148-655-194465257064639/AnsiballZ_command.py'
Jan 06 15:27:31 compute-0 sudo[176818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:32 compute-0 python3.9[176820]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:27:32 compute-0 sudo[176818]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:32 compute-0 sudo[176971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtmxcljsprkhyyesemhdeakuvqrialvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713252.2868204-655-12568208693458/AnsiballZ_command.py'
Jan 06 15:27:32 compute-0 sudo[176971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:32 compute-0 python3.9[176973]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:27:32 compute-0 sudo[176971]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:33 compute-0 sudo[177124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubhyvhdkhelokddtrojzrigxjybvncqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713252.9868605-655-87222078564651/AnsiballZ_command.py'
Jan 06 15:27:33 compute-0 sudo[177124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:33 compute-0 python3.9[177126]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:27:33 compute-0 sudo[177124]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:33 compute-0 sudo[177277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrjroikatoflqnyxxlulxkihbhalzfgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713253.6005938-655-21208810478581/AnsiballZ_command.py'
Jan 06 15:27:33 compute-0 sudo[177277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:34 compute-0 python3.9[177279]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:27:34 compute-0 sudo[177277]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:34 compute-0 sudo[177430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqbuqbwfcequuabbzpbpaxiorrjhacdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713254.2484524-655-49553062371299/AnsiballZ_command.py'
Jan 06 15:27:34 compute-0 sudo[177430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:34 compute-0 python3.9[177432]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:27:34 compute-0 sudo[177430]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:35 compute-0 sudo[177583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxiiqjmnsiyhsjcqroglpnemyvjhcztq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713254.9367685-655-98879237915556/AnsiballZ_command.py'
Jan 06 15:27:35 compute-0 sudo[177583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:35 compute-0 python3.9[177585]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:27:35 compute-0 sudo[177583]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:38 compute-0 sudo[177736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhapgrptyrhnhqugdnkcwppcwvyovcbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713257.8374596-734-193667693982109/AnsiballZ_file.py'
Jan 06 15:27:38 compute-0 sudo[177736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:38 compute-0 python3.9[177738]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:38 compute-0 sudo[177736]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:38 compute-0 sudo[177888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kneusbgbeejpdwnbhfczntderchrozjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713258.5614488-734-220150218805265/AnsiballZ_file.py'
Jan 06 15:27:38 compute-0 sudo[177888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:39 compute-0 python3.9[177890]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:39 compute-0 sudo[177888]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:39 compute-0 sudo[178040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opmsrpkskuielrdnpsqtmgroeadccggl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713259.2916975-734-88506419012275/AnsiballZ_file.py'
Jan 06 15:27:39 compute-0 sudo[178040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:39 compute-0 python3.9[178042]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:39 compute-0 sudo[178040]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:40 compute-0 sudo[178192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-essnzlzdlykmkqcyfdpuxiqfwbztblbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713259.9588604-756-271542528796946/AnsiballZ_file.py'
Jan 06 15:27:40 compute-0 sudo[178192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:40 compute-0 python3.9[178194]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:40 compute-0 sudo[178192]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:40 compute-0 sudo[178344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpgzhyzrciceeohmezoyfojntszrfncs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713260.656522-756-170944769051612/AnsiballZ_file.py'
Jan 06 15:27:40 compute-0 sudo[178344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:41 compute-0 python3.9[178346]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:41 compute-0 sudo[178344]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:41 compute-0 sudo[178496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpylehruethrziauixcqqoisnudolejp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713261.2900546-756-40554744234142/AnsiballZ_file.py'
Jan 06 15:27:41 compute-0 sudo[178496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:41 compute-0 python3.9[178498]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:41 compute-0 sudo[178496]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:42 compute-0 sudo[178648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uczivrebjspzqsaahlndtyitlxtuzmch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713261.9429374-756-12134360508143/AnsiballZ_file.py'
Jan 06 15:27:42 compute-0 sudo[178648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:42 compute-0 python3.9[178650]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:42 compute-0 sudo[178648]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:42 compute-0 sudo[178800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uckusdiakyiqanjsezecronqnfwsfmwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713262.5969326-756-153530292392763/AnsiballZ_file.py'
Jan 06 15:27:42 compute-0 sudo[178800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:43 compute-0 python3.9[178802]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:43 compute-0 sudo[178800]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:43 compute-0 sudo[178952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhkkjkwtewderlzqmwtpqxxivxhwyikq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713263.2897036-756-129030960647068/AnsiballZ_file.py'
Jan 06 15:27:43 compute-0 sudo[178952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:43 compute-0 python3.9[178954]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:43 compute-0 sudo[178952]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:44 compute-0 sudo[179104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssqtxvbynvnjxvxprmvvmchgmiibkshz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713263.9397278-756-199523048014571/AnsiballZ_file.py'
Jan 06 15:27:44 compute-0 sudo[179104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:44 compute-0 python3.9[179106]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:44 compute-0 sudo[179104]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:48 compute-0 podman[179183]: 2026-01-06 15:27:48.874804008 +0000 UTC m=+0.140427838 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:27:49 compute-0 sudo[179283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unucigdglhdpvukurckkydrfgqxungbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713268.5780213-925-155275874348005/AnsiballZ_getent.py'
Jan 06 15:27:49 compute-0 sudo[179283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:49 compute-0 python3.9[179285]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 06 15:27:49 compute-0 sudo[179283]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:50 compute-0 sudo[179436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coelrnqrpfjkntcwjwshpfgxhcmqapsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713269.4917-933-76808826437538/AnsiballZ_group.py'
Jan 06 15:27:50 compute-0 sudo[179436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:50 compute-0 python3.9[179438]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 06 15:27:50 compute-0 groupadd[179439]: group added to /etc/group: name=nova, GID=42436
Jan 06 15:27:50 compute-0 groupadd[179439]: group added to /etc/gshadow: name=nova
Jan 06 15:27:50 compute-0 groupadd[179439]: new group: name=nova, GID=42436
Jan 06 15:27:50 compute-0 sudo[179436]: pam_unix(sudo:session): session closed for user root
Jan 06 15:27:50 compute-0 sudo[179594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kixgcjfeppnawnevhnoioltlvtdhphiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713270.5381424-941-167485291252371/AnsiballZ_user.py'
Jan 06 15:27:50 compute-0 sudo[179594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:27:51 compute-0 python3.9[179596]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 06 15:27:51 compute-0 useradd[179598]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 06 15:27:51 compute-0 useradd[179598]: add 'nova' to group 'libvirt'
Jan 06 15:27:51 compute-0 useradd[179598]: add 'nova' to shadow group 'libvirt'
Jan 06 15:27:51 compute-0 sudo[179594]: pam_unix(sudo:session): session closed for user root
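The two Ansible tasks above pin the nova group and user to GID/UID 42436, shell /bin/sh, with libvirt as a supplementary group. A hedged shell equivalent of what groupadd/useradd were asked to do, with the values taken from the logged module arguments:

    groupadd -g 42436 nova
    useradd -u 42436 -g nova -G libvirt -s /bin/sh -m -c 'nova user' nova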
Jan 06 15:27:52 compute-0 sshd-session[179629]: Accepted publickey for zuul from 192.168.122.30 port 39230 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:27:52 compute-0 systemd-logind[791]: New session 25 of user zuul.
Jan 06 15:27:52 compute-0 systemd[1]: Started Session 25 of User zuul.
Jan 06 15:27:52 compute-0 sshd-session[179629]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:27:52 compute-0 sshd-session[179632]: Received disconnect from 192.168.122.30 port 39230:11: disconnected by user
Jan 06 15:27:52 compute-0 sshd-session[179632]: Disconnected from user zuul 192.168.122.30 port 39230
Jan 06 15:27:52 compute-0 sshd-session[179629]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:27:52 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 06 15:27:52 compute-0 systemd-logind[791]: Session 25 logged out. Waiting for processes to exit.
Jan 06 15:27:52 compute-0 systemd-logind[791]: Removed session 25.
Jan 06 15:27:53 compute-0 python3.9[179782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:27:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:27:53.664 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:27:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:27:53.666 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:27:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:27:53.666 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:27:53 compute-0 python3.9[179903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713272.6847115-966-199913866719518/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:54 compute-0 python3.9[180053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:27:55 compute-0 python3.9[180129]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:55 compute-0 python3.9[180279]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:27:56 compute-0 python3.9[180400]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713275.2263265-966-51517931524833/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:56 compute-0 python3.9[180550]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:27:57 compute-0 python3.9[180671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713276.4289982-966-172059971117841/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:58 compute-0 python3.9[180821]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:27:58 compute-0 python3.9[180942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713277.6546624-966-193350536492615/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:27:59 compute-0 podman[181066]: 2026-01-06 15:27:59.1808055 +0000 UTC m=+0.052602386 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:27:59 compute-0 python3.9[181109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:27:59 compute-0 python3.9[181232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713278.8688226-966-152105921285802/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:28:00 compute-0 sudo[181382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgxipwinbavukdlcmcjxanadstytdfpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713280.1665084-1049-190096955720875/AnsiballZ_file.py'
Jan 06 15:28:00 compute-0 sudo[181382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:00 compute-0 python3.9[181384]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:28:00 compute-0 sudo[181382]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:01 compute-0 sudo[181534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoarerfwqnjtqsyawdifsevbqjabtrrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713280.9770646-1057-274995053587928/AnsiballZ_copy.py'
Jan 06 15:28:01 compute-0 sudo[181534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:01 compute-0 python3.9[181536]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:28:01 compute-0 sudo[181534]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:02 compute-0 sudo[181686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oumpeogkmrqmffyyzruaeeeowckspbmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713281.793003-1065-115904865122384/AnsiballZ_stat.py'
Jan 06 15:28:02 compute-0 sudo[181686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:02 compute-0 python3.9[181688]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:28:02 compute-0 sudo[181686]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:02 compute-0 sudo[181838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmjpwvftqebfvugsevjfvumhfaoekrwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713282.4870589-1073-254945293394659/AnsiballZ_stat.py'
Jan 06 15:28:02 compute-0 sudo[181838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:03 compute-0 python3.9[181840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:28:03 compute-0 sudo[181838]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:03 compute-0 sudo[181961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evfjvvwbdjgcilfdwcsmypsflvqymzsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713282.4870589-1073-254945293394659/AnsiballZ_copy.py'
Jan 06 15:28:03 compute-0 sudo[181961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:03 compute-0 python3.9[181963]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1767713282.4870589-1073-254945293394659/.source _original_basename=.9gjsuiie follow=False checksum=fda116ce1a87ac648f6d0a23126de9caa8b17b05 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 06 15:28:03 compute-0 sudo[181961]: pam_unix(sudo:session): session closed for user root
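The compute_id file above is installed owned by nova:nova, mode 0400, and with attributes=+i, i.e. the immutable bit, so even root cannot rewrite it without clearing the flag first. A quick sanity check, assuming the path as logged:

    # an 'i' in the attribute column confirms the immutable flag
    lsattr /var/lib/nova/compute_id
    # only needed if the compute id ever has to be regenerated
    # chattr -i /var/lib/nova/compute_id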
Jan 06 15:28:04 compute-0 python3.9[182115]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:28:05 compute-0 python3.9[182267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:28:05 compute-0 python3.9[182388]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713284.7490551-1099-262818647912008/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:28:06 compute-0 python3.9[182538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:28:07 compute-0 python3.9[182659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713286.0395765-1114-110688022890485/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:28:08 compute-0 sudo[182809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isleoellmhgukxlafdahzgwabkfiiyvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713287.5489104-1131-27143572811487/AnsiballZ_container_config_data.py'
Jan 06 15:28:08 compute-0 sudo[182809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:08 compute-0 python3.9[182811]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 06 15:28:08 compute-0 sudo[182809]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:09 compute-0 sudo[182961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrvcfcoldmgrmogrldwgkyujhmxgffyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713288.6170142-1142-27592922739704/AnsiballZ_container_config_hash.py'
Jan 06 15:28:09 compute-0 sudo[182961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:09 compute-0 python3.9[182963]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 06 15:28:09 compute-0 sudo[182961]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:10 compute-0 sudo[183113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzsefjaazbrhsebjxxktggmxbsaatlsl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713289.7440996-1152-138407068183882/AnsiballZ_edpm_container_manage.py'
Jan 06 15:28:10 compute-0 sudo[183113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:10 compute-0 python3[183115]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 06 15:28:10 compute-0 podman[183152]: 2026-01-06 15:28:10.918389317 +0000 UTC m=+0.072635513 container create 2351265932d7792819f5b179d26e17c2ce46acad3a42f7cb724d51c8f28538d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 06 15:28:10 compute-0 podman[183152]: 2026-01-06 15:28:10.877586123 +0000 UTC m=+0.031832419 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 06 15:28:10 compute-0 python3[183115]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 06 15:28:11 compute-0 sudo[183113]: pam_unix(sudo:session): session closed for user root
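The nova_compute_init container created above is a one-shot ('restart': 'never', 'net': 'none') that runs nova_statedir_ownership.py to fix ownership under /var/lib/nova, skipping the path named in NOVA_STATEDIR_OWNERSHIP_SKIP. A minimal sketch of replaying it by hand, reusing the image and mounts from the logged create command but invoking the script directly instead of through the bash/logger wrapper:

    podman run --rm --net none --security-opt label=disable --user root \
      --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id \
      --volume /dev/log:/dev/log \
      --volume /var/lib/nova:/var/lib/nova:shared \
      --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z \
      --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z \
      quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified \
      python3 /sbin/nova_statedir_ownership.py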
Jan 06 15:28:11 compute-0 sudo[183339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhpufhaaopyqrxefdbqlwdyojahagvoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713291.2363796-1160-18598926159657/AnsiballZ_stat.py'
Jan 06 15:28:11 compute-0 sudo[183339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:11 compute-0 python3.9[183341]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:28:11 compute-0 sudo[183339]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:12 compute-0 sudo[183493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnddfkitnzyrmihazkkrjtzipnjtsmyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713292.2912333-1172-114501819407057/AnsiballZ_container_config_data.py'
Jan 06 15:28:12 compute-0 sudo[183493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:12 compute-0 python3.9[183495]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 06 15:28:12 compute-0 sudo[183493]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:13 compute-0 sudo[183645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sapvsvrvowleujhdvnrnkuidaheclshd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713293.1429446-1183-50662512691656/AnsiballZ_container_config_hash.py'
Jan 06 15:28:13 compute-0 sudo[183645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:13 compute-0 python3.9[183647]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 06 15:28:13 compute-0 sudo[183645]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:14 compute-0 sudo[183797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esrwienpxwobsenacemuzlhlumsgdtaz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713293.9520416-1193-4710583754065/AnsiballZ_edpm_container_manage.py'
Jan 06 15:28:14 compute-0 sudo[183797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:14 compute-0 python3[183799]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 06 15:28:14 compute-0 podman[183836]: 2026-01-06 15:28:14.745081187 +0000 UTC m=+0.062105316 container create 55b8ec43af1fcd596145fa6a59e91743fafd2328d1bbf0a28ef963ea14cfff22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:28:14 compute-0 podman[183836]: 2026-01-06 15:28:14.714265626 +0000 UTC m=+0.031289755 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 06 15:28:14 compute-0 python3[183799]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 06 15:28:14 compute-0 sudo[183797]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:15 compute-0 sudo[184024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtfodhmwthhhavosqtnfbjwmvuedolew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713295.1398387-1201-31537418061600/AnsiballZ_stat.py'
Jan 06 15:28:15 compute-0 sudo[184024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:15 compute-0 python3.9[184026]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:28:15 compute-0 sudo[184024]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:16 compute-0 sudo[184178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siwomeyofndlwxwavfevqsfmdgvttxza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713295.9940414-1210-29521941512352/AnsiballZ_file.py'
Jan 06 15:28:16 compute-0 sudo[184178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:16 compute-0 python3.9[184180]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:28:16 compute-0 sudo[184178]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:17 compute-0 sudo[184329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcfehuuksghwzltpehinukcqmepkwuqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713296.6235738-1210-230947421447146/AnsiballZ_copy.py'
Jan 06 15:28:17 compute-0 sudo[184329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:17 compute-0 python3.9[184331]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767713296.6235738-1210-230947421447146/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:28:17 compute-0 sudo[184329]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:17 compute-0 sudo[184405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmhndzncwxeqcnsgezrmgxsenstorinz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713296.6235738-1210-230947421447146/AnsiballZ_systemd.py'
Jan 06 15:28:17 compute-0 sudo[184405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:17 compute-0 python3.9[184407]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:28:17 compute-0 systemd[1]: Reloading.
Jan 06 15:28:18 compute-0 systemd-sysv-generator[184441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:28:18 compute-0 systemd-rc-local-generator[184438]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:28:18 compute-0 sudo[184405]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:18 compute-0 sudo[184516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkuidmealeqgdywjfjvpeqtzllgrsusm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713296.6235738-1210-230947421447146/AnsiballZ_systemd.py'
Jan 06 15:28:18 compute-0 sudo[184516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:18 compute-0 python3.9[184518]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:28:18 compute-0 systemd[1]: Reloading.
Jan 06 15:28:18 compute-0 systemd-rc-local-generator[184548]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:28:18 compute-0 systemd-sysv-generator[184551]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:28:19 compute-0 systemd[1]: Starting nova_compute container...
Jan 06 15:28:19 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c8906b2185567afe49b9d0232f2e3e0cf58688bf5e5d87567635730a43211/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c8906b2185567afe49b9d0232f2e3e0cf58688bf5e5d87567635730a43211/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c8906b2185567afe49b9d0232f2e3e0cf58688bf5e5d87567635730a43211/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c8906b2185567afe49b9d0232f2e3e0cf58688bf5e5d87567635730a43211/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c8906b2185567afe49b9d0232f2e3e0cf58688bf5e5d87567635730a43211/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:19 compute-0 podman[184559]: 2026-01-06 15:28:19.280731631 +0000 UTC m=+0.117130774 container init 55b8ec43af1fcd596145fa6a59e91743fafd2328d1bbf0a28ef963ea14cfff22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 06 15:28:19 compute-0 podman[184559]: 2026-01-06 15:28:19.287463648 +0000 UTC m=+0.123862751 container start 55b8ec43af1fcd596145fa6a59e91743fafd2328d1bbf0a28ef963ea14cfff22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute)
Jan 06 15:28:19 compute-0 nova_compute[184587]: + sudo -E kolla_set_configs
Jan 06 15:28:19 compute-0 podman[184559]: nova_compute
Jan 06 15:28:19 compute-0 podman[184557]: 2026-01-06 15:28:19.292280315 +0000 UTC m=+0.131283167 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 06 15:28:19 compute-0 systemd[1]: Started nova_compute container.
Jan 06 15:28:19 compute-0 sudo[184516]: pam_unix(sudo:session): session closed for user root
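At this point the container is wrapped by the edpm_nova_compute.service unit installed earlier in this run, so lifecycle management goes through systemd rather than podman directly. Two hedged checks, assuming the unit name as logged:

    systemctl status edpm_nova_compute.service
    systemctl cat edpm_nova_compute.service   # shows the generated unit file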
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Validating config file
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Copying service configuration files
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Deleting /etc/ceph
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Creating directory /etc/ceph
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /etc/ceph
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Writing out command to execute
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 06 15:28:19 compute-0 nova_compute[184587]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 06 15:28:19 compute-0 nova_compute[184587]: ++ cat /run_command
Jan 06 15:28:19 compute-0 nova_compute[184587]: + CMD=nova-compute
Jan 06 15:28:19 compute-0 nova_compute[184587]: + ARGS=
Jan 06 15:28:19 compute-0 nova_compute[184587]: + sudo kolla_copy_cacerts
Jan 06 15:28:19 compute-0 nova_compute[184587]: + [[ ! -n '' ]]
Jan 06 15:28:19 compute-0 nova_compute[184587]: + . kolla_extend_start
Jan 06 15:28:19 compute-0 nova_compute[184587]: + echo 'Running command: '\''nova-compute'\'''
Jan 06 15:28:19 compute-0 nova_compute[184587]: Running command: 'nova-compute'
Jan 06 15:28:19 compute-0 nova_compute[184587]: + umask 0022
Jan 06 15:28:19 compute-0 nova_compute[184587]: + exec nova-compute
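The trace above is the kolla_start entrypoint: kolla_set_configs copies the files listed in /var/lib/kolla/config_files/config.json into place, the command to run is read from /run_command, and the shell finally execs nova-compute. Two hedged ways to inspect the result inside the running container:

    # the command kolla_start resolved and exec'd
    podman exec -u root nova_compute cat /run_command
    # the layered nova configuration assembled by kolla_set_configs
    podman exec nova_compute ls /etc/nova/nova.conf.d/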
Jan 06 15:28:20 compute-0 python3.9[184758]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:28:20 compute-0 python3.9[184909]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:28:21 compute-0 nova_compute[184587]: 2026-01-06 15:28:21.468 184600 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 06 15:28:21 compute-0 nova_compute[184587]: 2026-01-06 15:28:21.469 184600 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 06 15:28:21 compute-0 nova_compute[184587]: 2026-01-06 15:28:21.469 184600 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 06 15:28:21 compute-0 nova_compute[184587]: 2026-01-06 15:28:21.469 184600 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 06 15:28:21 compute-0 nova_compute[184587]: 2026-01-06 15:28:21.605 184600 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 15:28:21 compute-0 nova_compute[184587]: 2026-01-06 15:28:21.627 184600 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 15:28:21 compute-0 nova_compute[184587]: 2026-01-06 15:28:21.628 184600 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 06 15:28:21 compute-0 python3.9[185061]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.249 184600 INFO nova.virt.driver [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.389 184600 INFO nova.compute.provider_config [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.405 184600 DEBUG oslo_concurrency.lockutils [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.406 184600 DEBUG oslo_concurrency.lockutils [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.406 184600 DEBUG oslo_concurrency.lockutils [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.406 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.406 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.406 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.407 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.407 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.407 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.407 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.407 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.407 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.407 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.407 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.408 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.408 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.408 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.408 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.408 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.408 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.408 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.409 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.409 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.409 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.409 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.409 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.409 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.409 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.409 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.410 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.410 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.410 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.410 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.410 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.410 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.410 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.411 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.411 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.411 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.411 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.411 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.411 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.411 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.412 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.412 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.412 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.412 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.412 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.412 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.412 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.413 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.413 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.413 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.413 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.413 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.413 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.413 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.414 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.414 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.414 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.414 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.414 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.414 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.414 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.414 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.415 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.415 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.415 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.415 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.415 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.415 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.415 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.415 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.416 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.416 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.416 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.416 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.416 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.416 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.416 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.416 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.417 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.417 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.417 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.417 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.417 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.417 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.417 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.418 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.418 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.418 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.418 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.418 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.418 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.418 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.418 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.419 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.419 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.419 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.419 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.419 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.419 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.419 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.419 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.420 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.420 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.420 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.420 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.420 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.420 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.420 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.421 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.421 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.421 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.421 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.421 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.421 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.421 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.421 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.422 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.422 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.422 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.422 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.422 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.422 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.422 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.422 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.423 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.423 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.423 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.423 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.423 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.423 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.423 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.423 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.424 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.424 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.424 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.424 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.424 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.424 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.424 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.424 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.425 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.425 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.425 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.425 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.425 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.425 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.425 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.425 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.426 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.426 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.426 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.426 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.426 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.426 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.426 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.427 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.427 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.427 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.427 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.427 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.427 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.427 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.428 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.428 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.428 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.428 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.428 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.428 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.428 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.428 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.429 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.429 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.429 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.429 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.429 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.429 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.429 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.430 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.430 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.430 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.430 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.430 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.430 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.430 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.431 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.431 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.431 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.431 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.431 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.431 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.431 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.431 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.432 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.432 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.432 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.432 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.432 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.432 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.432 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.433 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.433 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.433 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.433 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.433 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.433 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.433 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.433 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.434 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.434 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.434 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.434 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.434 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.434 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.434 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.435 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.435 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.435 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.435 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.435 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.435 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.435 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.435 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.436 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.436 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.436 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.436 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.436 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.436 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.436 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.436 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.437 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.437 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.437 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.437 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.437 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.437 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.437 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.438 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.438 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.438 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.438 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.438 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.438 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.438 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.438 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.439 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.439 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.439 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.439 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.439 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.439 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.439 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.440 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.440 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.440 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.440 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.440 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.440 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.440 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.440 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.441 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.441 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.441 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.441 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.441 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.441 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.441 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.442 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.442 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.442 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.442 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.442 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.442 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.442 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.442 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.443 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.443 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.443 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.443 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.443 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.443 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.443 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.443 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.444 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.444 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.444 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.444 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.444 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.444 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.444 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.445 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.445 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.445 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.445 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.445 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.445 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.445 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.445 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.446 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.446 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.446 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.446 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.446 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.446 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.446 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.446 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.447 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.447 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.447 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.447 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.447 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.447 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.447 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.448 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.448 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.448 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.448 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.448 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.448 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.448 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.448 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.449 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.449 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.449 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.449 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.449 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.449 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.449 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.449 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.450 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.450 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.450 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.450 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.450 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.450 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.450 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.451 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.451 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.451 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.451 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.451 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.451 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.451 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.451 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.452 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.452 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.452 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.452 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.452 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.452 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.452 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.452 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.453 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.453 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.453 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.453 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.453 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.454 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.454 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.454 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.454 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.454 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.454 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.454 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.454 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.455 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.455 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.455 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.455 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.455 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.455 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.455 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.455 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.456 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.456 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.456 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.456 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.456 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.456 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.456 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.457 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.457 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.457 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.457 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.457 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.457 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.457 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.457 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.458 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.458 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.458 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.458 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.458 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.458 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.458 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.458 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.459 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.459 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.459 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.459 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.459 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.459 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.459 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.460 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.460 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.460 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.460 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.460 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.460 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.460 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.460 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.461 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.461 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.461 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.461 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.461 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.461 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.461 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.462 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.462 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.462 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.462 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.462 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.462 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.462 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.462 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.463 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.463 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.463 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.463 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.463 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.463 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.463 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.463 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.464 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.464 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.464 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.464 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.464 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.464 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.464 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.464 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.465 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.465 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.465 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.465 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.465 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.465 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.465 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.466 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.466 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.466 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.466 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.466 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.466 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.466 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.467 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.467 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.467 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.467 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.467 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.467 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.467 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.468 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.468 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.468 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.468 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.468 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.468 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.468 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.468 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.469 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.469 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.469 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.469 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.469 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.469 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.469 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.470 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.470 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.470 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.470 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.470 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.470 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.470 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.470 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.471 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.471 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.471 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.471 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.471 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.471 184600 WARNING oslo_config.cfg [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 06 15:28:22 compute-0 nova_compute[184587]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 06 15:28:22 compute-0 nova_compute[184587]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 06 15:28:22 compute-0 nova_compute[184587]: and ``live_migration_inbound_addr`` respectively.
Jan 06 15:28:22 compute-0 nova_compute[184587]: ).  Its value may be silently ignored in the future.
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.472 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
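Note on the deprecation warning above: the option dump itself already lists the two replacement options, libvirt.live_migration_scheme (currently None) and libvirt.live_migration_inbound_addr (currently None). As an illustrative sketch only, not this node's actual configuration, the same qemu+tls migration target could instead be expressed in the [libvirt] section of nova.conf roughly as follows (the hostname below is hypothetical):

    [libvirt]
    # replaces live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # optionally pin the address the destination host listens on for inbound migrations
    live_migration_inbound_addr = compute-0.ctlplane.example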
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.472 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.472 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.472 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.472 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.472 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.472 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.473 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.473 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.473 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.473 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.473 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.473 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.473 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.474 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.474 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.474 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.474 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.474 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.474 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.474 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.474 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.475 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.475 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.475 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.475 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.475 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.475 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.475 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.476 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.476 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.476 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.476 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.476 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.476 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.476 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.477 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.477 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.477 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.477 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.477 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.477 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.477 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.477 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.478 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.478 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.478 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.478 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.478 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.478 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.478 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.479 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.479 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.479 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.479 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.479 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.479 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.479 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.480 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.480 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.480 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.480 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.480 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.480 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.480 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.481 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.481 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.481 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.481 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.481 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.481 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.481 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.481 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.482 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.482 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.482 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.482 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.482 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.482 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.482 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.482 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.483 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.483 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.483 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.483 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.483 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.483 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.483 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.484 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.484 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.484 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.484 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.484 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.484 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.484 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.485 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.485 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.485 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.485 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.485 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.485 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.485 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.486 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.486 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.486 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.486 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.486 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.486 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.486 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.487 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.487 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.487 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.487 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.487 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.487 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.487 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.487 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.488 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.488 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.488 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.488 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.488 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.488 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.488 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.489 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.489 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.489 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.489 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.489 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.489 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.489 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.490 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.490 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.490 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.490 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.490 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.490 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.490 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.491 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.491 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.491 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.491 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.491 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.491 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.492 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.492 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.492 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.492 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.492 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.492 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.492 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.493 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.493 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.493 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.493 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.493 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.493 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.493 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.494 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.494 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.494 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.494 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.494 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.494 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.494 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.494 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.495 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.495 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.495 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.495 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.495 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.495 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.496 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.496 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.496 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.496 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.496 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.496 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.497 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.497 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.497 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.497 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.497 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.497 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.497 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.498 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.498 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.498 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.498 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.498 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.498 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.498 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.499 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.499 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.499 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.499 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.499 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.499 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.500 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.500 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.500 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.500 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.500 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.500 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.500 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.501 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.501 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.501 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.501 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.501 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.501 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.501 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.501 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.502 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.502 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.502 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.502 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.502 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.502 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.502 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.503 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.503 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.503 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.503 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.503 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.503 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.503 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.504 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.504 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.504 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.504 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.504 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.504 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.504 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.505 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.505 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.505 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.505 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.505 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.505 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.505 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.506 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.506 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.506 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.506 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.506 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.506 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.507 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.507 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.507 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.507 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.507 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.508 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.508 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.508 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.508 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.508 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.508 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.508 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.509 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.509 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.509 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.509 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.509 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.509 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.509 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.510 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.510 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.510 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.510 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.510 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.510 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.510 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.510 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.511 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.511 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.511 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.511 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.511 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.511 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.512 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.512 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.512 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.512 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.512 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.512 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.513 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.513 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.513 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.513 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.513 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.513 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.513 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.514 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.514 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.514 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.514 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.514 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.514 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.515 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.515 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.515 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.515 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.515 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.515 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.515 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.516 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.516 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.516 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.516 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.516 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.516 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.516 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.517 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.517 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.517 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.517 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.517 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.517 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.517 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.518 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.518 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.518 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.518 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.518 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.518 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.518 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.519 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.519 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.519 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.519 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.519 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.519 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.519 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.520 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.520 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.520 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.520 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.520 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.521 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.521 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.521 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.521 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.521 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.522 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.522 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.522 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.522 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.522 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.522 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.523 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.523 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.523 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.523 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.523 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.523 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.523 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.524 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.524 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.524 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.524 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.524 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.524 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.524 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.524 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.525 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.525 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.525 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.525 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.525 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.525 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.525 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.526 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.526 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.526 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.526 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.526 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.526 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.526 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.526 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.527 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.527 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.527 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.527 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.527 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.527 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.527 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.528 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.528 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.528 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.528 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.528 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.528 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.528 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.529 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.529 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.529 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.529 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.529 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.529 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.529 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.530 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.530 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.530 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.530 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.530 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.530 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.530 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.530 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.531 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.531 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.531 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.531 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.531 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.531 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.531 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.532 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.532 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.532 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.532 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.532 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.532 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.533 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.533 184600 DEBUG oslo_service.service [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.534 184600 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.548 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.548 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.549 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.549 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 06 15:28:22 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 06 15:28:22 compute-0 sudo[185213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nakppikmykhrgdjqaekyudjqihokjgzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713302.0918567-1270-155861761158473/AnsiballZ_podman_container.py'
Jan 06 15:28:22 compute-0 sudo[185213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:22 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.634 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd8636cf1f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.637 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd8636cf1f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.638 184600 INFO nova.virt.libvirt.driver [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Connection event '1' reason 'None'
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.660 184600 WARNING nova.virt.libvirt.driver [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 06 15:28:22 compute-0 nova_compute[184587]: 2026-01-06 15:28:22.661 184600 DEBUG nova.virt.libvirt.volume.mount [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 06 15:28:22 compute-0 python3.9[185237]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 06 15:28:22 compute-0 sudo[185213]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:22 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 06 15:28:22 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 06 15:28:23 compute-0 sudo[185448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zehsyglxhlgvfzoexeuanypxppmtvuhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713303.1701276-1278-78183706057627/AnsiballZ_systemd.py'
Jan 06 15:28:23 compute-0 sudo[185448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.532 184600 INFO nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Libvirt host capabilities <capabilities>
Jan 06 15:28:23 compute-0 nova_compute[184587]: 
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <host>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <uuid>f243d16a-3dea-407f-9cc3-41cda7bb8d99</uuid>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <cpu>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <arch>x86_64</arch>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model>EPYC-Rome-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <vendor>AMD</vendor>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <microcode version='16777317'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <signature family='23' model='49' stepping='0'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='x2apic'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='tsc-deadline'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='osxsave'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='hypervisor'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='tsc_adjust'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='spec-ctrl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='stibp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='arch-capabilities'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='cmp_legacy'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='topoext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='virt-ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='lbrv'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='tsc-scale'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='vmcb-clean'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='pause-filter'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='pfthreshold'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='svme-addr-chk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='rdctl-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='skip-l1dfl-vmentry'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='mds-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature name='pschange-mc-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <pages unit='KiB' size='4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <pages unit='KiB' size='2048'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <pages unit='KiB' size='1048576'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </cpu>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <power_management>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <suspend_mem/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <suspend_disk/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <suspend_hybrid/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </power_management>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <iommu support='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <migration_features>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <live/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <uri_transports>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <uri_transport>tcp</uri_transport>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <uri_transport>rdma</uri_transport>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </uri_transports>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </migration_features>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <topology>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <cells num='1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <cell id='0'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:           <memory unit='KiB'>7864312</memory>
Jan 06 15:28:23 compute-0 nova_compute[184587]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 06 15:28:23 compute-0 nova_compute[184587]:           <pages unit='KiB' size='2048'>0</pages>
Jan 06 15:28:23 compute-0 nova_compute[184587]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 06 15:28:23 compute-0 nova_compute[184587]:           <distances>
Jan 06 15:28:23 compute-0 nova_compute[184587]:             <sibling id='0' value='10'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:           </distances>
Jan 06 15:28:23 compute-0 nova_compute[184587]:           <cpus num='8'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:           </cpus>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         </cell>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </cells>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </topology>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <cache>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </cache>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <secmodel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model>selinux</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <doi>0</doi>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </secmodel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <secmodel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model>dac</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <doi>0</doi>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </secmodel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </host>
Jan 06 15:28:23 compute-0 nova_compute[184587]: 
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <guest>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <os_type>hvm</os_type>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <arch name='i686'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <wordsize>32</wordsize>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <domain type='qemu'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <domain type='kvm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </arch>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <features>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <pae/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <nonpae/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <acpi default='on' toggle='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <apic default='on' toggle='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <cpuselection/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <deviceboot/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <disksnapshot default='on' toggle='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <externalSnapshot/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </features>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </guest>
Jan 06 15:28:23 compute-0 nova_compute[184587]: 
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <guest>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <os_type>hvm</os_type>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <arch name='x86_64'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <wordsize>64</wordsize>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <domain type='qemu'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <domain type='kvm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </arch>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <features>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <acpi default='on' toggle='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <apic default='on' toggle='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <cpuselection/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <deviceboot/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <disksnapshot default='on' toggle='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <externalSnapshot/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </features>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </guest>
Jan 06 15:28:23 compute-0 nova_compute[184587]: 
Jan 06 15:28:23 compute-0 nova_compute[184587]: </capabilities>
Jan 06 15:28:23 compute-0 nova_compute[184587]: 
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.542 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.563 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 06 15:28:23 compute-0 nova_compute[184587]: <domainCapabilities>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <path>/usr/libexec/qemu-kvm</path>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <domain>kvm</domain>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <arch>i686</arch>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <vcpu max='240'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <iothreads supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <os supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <enum name='firmware'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <loader supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>rom</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pflash</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='readonly'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>yes</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>no</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='secure'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>no</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </loader>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </os>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <cpu>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='host-passthrough' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='hostPassthroughMigratable'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>on</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>off</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='maximum' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='maximumMigratable'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>on</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>off</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='host-model' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <vendor>AMD</vendor>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='x2apic'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc-deadline'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='hypervisor'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc_adjust'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='spec-ctrl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='stibp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='cmp_legacy'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='overflow-recov'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='succor'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='amd-ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='virt-ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='lbrv'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc-scale'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='vmcb-clean'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='flushbyasid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='pause-filter'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='pfthreshold'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='svme-addr-chk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='disable' name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='custom' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Dhyana-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Genoa'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='auto-ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Genoa-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='auto-ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-128'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-256'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-512'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v6'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v7'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='KnightsMill'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512er'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512pf'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='KnightsMill-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512er'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512pf'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G4-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tbm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G5-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tbm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SierraForest'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cmpccxadd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SierraForest-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cmpccxadd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='athlon'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='athlon-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='core2duo'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='core2duo-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='coreduo'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='coreduo-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='n270'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='n270-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='phenom'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='phenom-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </cpu>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <memoryBacking supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <enum name='sourceType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>file</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>anonymous</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>memfd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </memoryBacking>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <devices>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <disk supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='diskDevice'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>disk</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>cdrom</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>floppy</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>lun</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='bus'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>ide</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>fdc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>scsi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>sata</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-non-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </disk>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <graphics supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vnc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>egl-headless</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dbus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </graphics>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <video supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='modelType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vga</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>cirrus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>none</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>bochs</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>ramfb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </video>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <hostdev supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='mode'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>subsystem</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='startupPolicy'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>default</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>mandatory</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>requisite</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>optional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='subsysType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pci</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>scsi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='capsType'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='pciBackend'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </hostdev>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <rng supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-non-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>random</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>egd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>builtin</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </rng>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <filesystem supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='driverType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>path</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>handle</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtiofs</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </filesystem>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <tpm supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tpm-tis</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tpm-crb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>emulator</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>external</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendVersion'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>2.0</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </tpm>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <redirdev supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='bus'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </redirdev>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <channel supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pty</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>unix</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </channel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <crypto supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>qemu</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>builtin</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </crypto>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <interface supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>default</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>passt</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </interface>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <panic supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>isa</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>hyperv</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </panic>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <console supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>null</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pty</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dev</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>file</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pipe</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>stdio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>udp</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tcp</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>unix</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>qemu-vdagent</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dbus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </console>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </devices>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <features>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <gic supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <vmcoreinfo supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <genid supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <backingStoreInput supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <backup supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <async-teardown supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <ps2 supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <sev supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <sgx supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <hyperv supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='features'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>relaxed</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vapic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>spinlocks</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vpindex</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>runtime</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>synic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>stimer</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>reset</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vendor_id</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>frequencies</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>reenlightenment</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tlbflush</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>ipi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>avic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>emsr_bitmap</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>xmm_input</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <defaults>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <spinlocks>4095</spinlocks>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <stimer_direct>on</stimer_direct>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <tlbflush_direct>on</tlbflush_direct>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <tlbflush_extended>on</tlbflush_extended>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </defaults>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </hyperv>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <launchSecurity supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='sectype'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tdx</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </launchSecurity>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </features>
Jan 06 15:28:23 compute-0 nova_compute[184587]: </domainCapabilities>
Jan 06 15:28:23 compute-0 nova_compute[184587]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.572 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 06 15:28:23 compute-0 nova_compute[184587]: <domainCapabilities>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <path>/usr/libexec/qemu-kvm</path>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <domain>kvm</domain>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <arch>i686</arch>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <vcpu max='4096'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <iothreads supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <os supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <enum name='firmware'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <loader supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>rom</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pflash</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='readonly'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>yes</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>no</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='secure'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>no</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </loader>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </os>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <cpu>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='host-passthrough' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='hostPassthroughMigratable'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>on</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>off</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='maximum' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='maximumMigratable'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>on</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>off</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='host-model' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <vendor>AMD</vendor>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='x2apic'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc-deadline'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='hypervisor'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc_adjust'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='spec-ctrl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='stibp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='cmp_legacy'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='overflow-recov'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='succor'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='amd-ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='virt-ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='lbrv'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc-scale'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='vmcb-clean'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='flushbyasid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='pause-filter'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='pfthreshold'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='svme-addr-chk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='disable' name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='custom' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Dhyana-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Genoa'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='auto-ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Genoa-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='auto-ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-128'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-256'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-512'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v6'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v7'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='KnightsMill'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512er'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512pf'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='KnightsMill-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512er'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512pf'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G4-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tbm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G5-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tbm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SierraForest'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cmpccxadd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SierraForest-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cmpccxadd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='athlon'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='athlon-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='core2duo'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='core2duo-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='coreduo'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='coreduo-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='n270'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='n270-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='phenom'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='phenom-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </cpu>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <memoryBacking supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <enum name='sourceType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>file</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>anonymous</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>memfd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </memoryBacking>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <devices>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <disk supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='diskDevice'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>disk</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>cdrom</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>floppy</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>lun</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='bus'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>fdc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>scsi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>sata</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-non-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </disk>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <graphics supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vnc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>egl-headless</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dbus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </graphics>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <video supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='modelType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vga</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>cirrus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>none</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>bochs</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>ramfb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </video>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <hostdev supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='mode'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>subsystem</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='startupPolicy'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>default</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>mandatory</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>requisite</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>optional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='subsysType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pci</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>scsi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='capsType'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='pciBackend'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </hostdev>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <rng supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-non-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>random</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>egd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>builtin</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </rng>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <filesystem supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='driverType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>path</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>handle</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtiofs</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </filesystem>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <tpm supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tpm-tis</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tpm-crb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>emulator</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>external</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendVersion'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>2.0</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </tpm>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <redirdev supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='bus'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </redirdev>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <channel supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pty</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>unix</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </channel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <crypto supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>qemu</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>builtin</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </crypto>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <interface supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>default</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>passt</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </interface>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <panic supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>isa</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>hyperv</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </panic>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <console supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>null</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pty</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dev</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>file</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pipe</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>stdio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>udp</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tcp</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>unix</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>qemu-vdagent</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dbus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </console>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </devices>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <features>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <gic supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <vmcoreinfo supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <genid supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <backingStoreInput supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <backup supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <async-teardown supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <ps2 supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <sev supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <sgx supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <hyperv supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='features'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>relaxed</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vapic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>spinlocks</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vpindex</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>runtime</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>synic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>stimer</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>reset</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vendor_id</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>frequencies</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>reenlightenment</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tlbflush</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>ipi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>avic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>emsr_bitmap</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>xmm_input</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <defaults>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <spinlocks>4095</spinlocks>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <stimer_direct>on</stimer_direct>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <tlbflush_direct>on</tlbflush_direct>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <tlbflush_extended>on</tlbflush_extended>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </defaults>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </hyperv>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <launchSecurity supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='sectype'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tdx</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </launchSecurity>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </features>
Jan 06 15:28:23 compute-0 nova_compute[184587]: </domainCapabilities>
Jan 06 15:28:23 compute-0 nova_compute[184587]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.624 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.628 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 06 15:28:23 compute-0 nova_compute[184587]: <domainCapabilities>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <path>/usr/libexec/qemu-kvm</path>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <domain>kvm</domain>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <arch>x86_64</arch>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <vcpu max='240'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <iothreads supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <os supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <enum name='firmware'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <loader supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>rom</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pflash</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='readonly'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>yes</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>no</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='secure'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>no</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </loader>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </os>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <cpu>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='host-passthrough' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='hostPassthroughMigratable'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>on</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>off</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='maximum' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='maximumMigratable'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>on</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>off</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='host-model' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <vendor>AMD</vendor>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='x2apic'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc-deadline'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='hypervisor'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc_adjust'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='spec-ctrl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='stibp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='cmp_legacy'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='overflow-recov'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='succor'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='amd-ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='virt-ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='lbrv'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc-scale'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='vmcb-clean'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='flushbyasid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='pause-filter'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='pfthreshold'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='svme-addr-chk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='disable' name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='custom' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Dhyana-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Genoa'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='auto-ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Genoa-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='auto-ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-128'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-256'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-512'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v6'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v7'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='KnightsMill'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512er'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512pf'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 python3.9[185450]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='KnightsMill-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512er'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512pf'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G4-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tbm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G5-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tbm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SierraForest'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cmpccxadd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SierraForest-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cmpccxadd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='athlon'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='athlon-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='core2duo'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='core2duo-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='coreduo'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='coreduo-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='n270'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='n270-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='phenom'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='phenom-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </cpu>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <memoryBacking supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <enum name='sourceType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>file</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>anonymous</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>memfd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </memoryBacking>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <devices>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <disk supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='diskDevice'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>disk</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>cdrom</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>floppy</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>lun</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='bus'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>ide</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>fdc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>scsi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>sata</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-non-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </disk>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <graphics supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vnc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>egl-headless</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dbus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </graphics>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <video supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='modelType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vga</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>cirrus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>none</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>bochs</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>ramfb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </video>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <hostdev supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='mode'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>subsystem</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='startupPolicy'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>default</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>mandatory</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>requisite</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>optional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='subsysType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pci</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>scsi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='capsType'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='pciBackend'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </hostdev>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <rng supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-non-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>random</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>egd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>builtin</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </rng>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <filesystem supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='driverType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>path</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>handle</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtiofs</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </filesystem>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <tpm supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tpm-tis</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tpm-crb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>emulator</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>external</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendVersion'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>2.0</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </tpm>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <redirdev supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='bus'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </redirdev>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <channel supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pty</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>unix</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </channel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <crypto supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>qemu</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>builtin</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </crypto>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <interface supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>default</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>passt</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </interface>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <panic supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>isa</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>hyperv</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </panic>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <console supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>null</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pty</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dev</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>file</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pipe</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>stdio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>udp</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tcp</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>unix</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>qemu-vdagent</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dbus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </console>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </devices>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <features>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <gic supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <vmcoreinfo supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <genid supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <backingStoreInput supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <backup supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <async-teardown supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <ps2 supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <sev supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <sgx supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <hyperv supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='features'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>relaxed</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vapic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>spinlocks</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vpindex</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>runtime</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>synic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>stimer</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>reset</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vendor_id</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>frequencies</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>reenlightenment</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tlbflush</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>ipi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>avic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>emsr_bitmap</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>xmm_input</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <defaults>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <spinlocks>4095</spinlocks>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <stimer_direct>on</stimer_direct>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <tlbflush_direct>on</tlbflush_direct>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <tlbflush_extended>on</tlbflush_extended>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </defaults>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </hyperv>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <launchSecurity supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='sectype'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tdx</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </launchSecurity>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </features>
Jan 06 15:28:23 compute-0 nova_compute[184587]: </domainCapabilities>
Jan 06 15:28:23 compute-0 nova_compute[184587]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.712 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 06 15:28:23 compute-0 nova_compute[184587]: <domainCapabilities>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <path>/usr/libexec/qemu-kvm</path>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <domain>kvm</domain>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <arch>x86_64</arch>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <vcpu max='4096'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <iothreads supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <os supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <enum name='firmware'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>efi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <loader supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>rom</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pflash</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='readonly'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>yes</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>no</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='secure'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>yes</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>no</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </loader>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </os>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <cpu>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='host-passthrough' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='hostPassthroughMigratable'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>on</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>off</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='maximum' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='maximumMigratable'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>on</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>off</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='host-model' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <vendor>AMD</vendor>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='x2apic'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc-deadline'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='hypervisor'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc_adjust'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='spec-ctrl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='stibp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='cmp_legacy'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='overflow-recov'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='succor'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='amd-ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='virt-ssbd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='lbrv'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='tsc-scale'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='vmcb-clean'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='flushbyasid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='pause-filter'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='pfthreshold'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='svme-addr-chk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <feature policy='disable' name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <mode name='custom' supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Broadwell-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cascadelake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Cooperlake-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Denverton-v3'>
Jan 06 15:28:23 compute-0 systemd[1]: Stopping nova_compute container...
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Dhyana-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Genoa'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='auto-ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Genoa-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='auto-ibrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Milan-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amd-psfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='stibp-always-on'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-Rome-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='EPYC-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='GraniteRapids-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-128'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-256'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx10-512'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='prefetchiti'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Haswell-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-noTSX'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v6'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Icelake-Server-v7'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='IvyBridge-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='KnightsMill'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512er'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512pf'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='KnightsMill-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512er'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512pf'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G4-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tbm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Opteron_G5-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fma4'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tbm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xop'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SapphireRapids-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='amx-tile'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-bf16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-fp16'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bitalg'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrc'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fzrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='la57'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='taa-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xfd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SierraForest'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cmpccxadd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='SierraForest-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ifma'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cmpccxadd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fbsdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='fsrs'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ibrs-all'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mcdt-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pbrsb-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='psdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='serialize'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vaes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Client-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='hle'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='rtm'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Skylake-Server-v5'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512bw'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512cd'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512dq'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512f'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='avx512vl'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='invpcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pcid'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='pku'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='mpx'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v2'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v3'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='core-capability'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='split-lock-detect'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='Snowridge-v4'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='cldemote'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='erms'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='gfni'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdir64b'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='movdiri'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='xsaves'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='athlon'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='athlon-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='core2duo'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='core2duo-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='coreduo'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='coreduo-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='n270'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='n270-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='ss'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='phenom'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <blockers model='phenom-v1'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnow'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <feature name='3dnowext'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </blockers>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </mode>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </cpu>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <memoryBacking supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <enum name='sourceType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>file</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>anonymous</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <value>memfd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </memoryBacking>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <devices>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <disk supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='diskDevice'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>disk</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>cdrom</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>floppy</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>lun</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='bus'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>fdc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>scsi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>sata</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-non-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </disk>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <graphics supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vnc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>egl-headless</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dbus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </graphics>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <video supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='modelType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vga</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>cirrus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>none</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>bochs</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>ramfb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </video>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <hostdev supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='mode'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>subsystem</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='startupPolicy'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>default</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>mandatory</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>requisite</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>optional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='subsysType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pci</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>scsi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='capsType'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='pciBackend'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </hostdev>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <rng supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtio-non-transitional</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>random</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>egd</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>builtin</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </rng>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <filesystem supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='driverType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>path</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>handle</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>virtiofs</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </filesystem>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <tpm supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tpm-tis</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tpm-crb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>emulator</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>external</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendVersion'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>2.0</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </tpm>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <redirdev supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='bus'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>usb</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </redirdev>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <channel supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pty</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>unix</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </channel>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <crypto supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>qemu</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendModel'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>builtin</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </crypto>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <interface supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='backendType'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>default</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>passt</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </interface>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <panic supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='model'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>isa</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>hyperv</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </panic>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <console supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='type'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>null</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vc</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pty</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dev</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>file</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>pipe</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>stdio</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>udp</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tcp</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>unix</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>qemu-vdagent</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>dbus</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </console>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </devices>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   <features>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <gic supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <vmcoreinfo supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <genid supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <backingStoreInput supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <backup supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <async-teardown supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <ps2 supported='yes'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <sev supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <sgx supported='no'/>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <hyperv supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='features'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>relaxed</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vapic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>spinlocks</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vpindex</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>runtime</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>synic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>stimer</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>reset</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>vendor_id</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>frequencies</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>reenlightenment</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tlbflush</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>ipi</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>avic</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>emsr_bitmap</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>xmm_input</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <defaults>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <spinlocks>4095</spinlocks>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <stimer_direct>on</stimer_direct>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <tlbflush_direct>on</tlbflush_direct>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <tlbflush_extended>on</tlbflush_extended>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </defaults>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </hyperv>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     <launchSecurity supported='yes'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       <enum name='sectype'>
Jan 06 15:28:23 compute-0 nova_compute[184587]:         <value>tdx</value>
Jan 06 15:28:23 compute-0 nova_compute[184587]:       </enum>
Jan 06 15:28:23 compute-0 nova_compute[184587]:     </launchSecurity>
Jan 06 15:28:23 compute-0 nova_compute[184587]:   </features>
Jan 06 15:28:23 compute-0 nova_compute[184587]: </domainCapabilities>
Jan 06 15:28:23 compute-0 nova_compute[184587]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.825 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.826 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.826 184600 DEBUG nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.826 184600 INFO nova.virt.libvirt.host [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Secure Boot support detected
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.828 184600 INFO nova.virt.libvirt.driver [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.829 184600 INFO nova.virt.libvirt.driver [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.840 184600 DEBUG nova.virt.libvirt.driver [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.877 184600 INFO nova.virt.node [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Determined node identity 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from /var/lib/nova/compute_id
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.900 184600 WARNING nova.compute.manager [None req-c1e8abde-b935-4321-81a3-b54652256e02 - - - - - -] Compute nodes ['6e7a5a7f-91c3-4b82-b43d-f32569e61608'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.918 184600 DEBUG oslo_concurrency.lockutils [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.918 184600 DEBUG oslo_concurrency.lockutils [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 06 15:28:23 compute-0 nova_compute[184587]: 2026-01-06 15:28:23.919 184600 DEBUG oslo_concurrency.lockutils [None req-dbb5192c-1899-4fc1-969a-7f3a76b3a01b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 06 15:28:24 compute-0 virtqemud[185235]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Jan 06 15:28:24 compute-0 virtqemud[185235]: hostname: compute-0
Jan 06 15:28:24 compute-0 virtqemud[185235]: End of file while reading data: Input/output error
Jan 06 15:28:24 compute-0 systemd[1]: libpod-55b8ec43af1fcd596145fa6a59e91743fafd2328d1bbf0a28ef963ea14cfff22.scope: Deactivated successfully.
Jan 06 15:28:24 compute-0 systemd[1]: libpod-55b8ec43af1fcd596145fa6a59e91743fafd2328d1bbf0a28ef963ea14cfff22.scope: Consumed 3.165s CPU time.
Jan 06 15:28:24 compute-0 conmon[184587]: conmon 55b8ec43af1fcd596145 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55b8ec43af1fcd596145fa6a59e91743fafd2328d1bbf0a28ef963ea14cfff22.scope/container/memory.events
Jan 06 15:28:24 compute-0 podman[185458]: 2026-01-06 15:28:24.344927789 +0000 UTC m=+0.483916071 container died 55b8ec43af1fcd596145fa6a59e91743fafd2328d1bbf0a28ef963ea14cfff22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 06 15:28:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-55b8ec43af1fcd596145fa6a59e91743fafd2328d1bbf0a28ef963ea14cfff22-userdata-shm.mount: Deactivated successfully.
Jan 06 15:28:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c2c8906b2185567afe49b9d0232f2e3e0cf58688bf5e5d87567635730a43211-merged.mount: Deactivated successfully.
Jan 06 15:28:24 compute-0 podman[185458]: 2026-01-06 15:28:24.405526624 +0000 UTC m=+0.544514886 container cleanup 55b8ec43af1fcd596145fa6a59e91743fafd2328d1bbf0a28ef963ea14cfff22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3)
Jan 06 15:28:24 compute-0 podman[185458]: nova_compute
Jan 06 15:28:24 compute-0 podman[185484]: nova_compute
Jan 06 15:28:24 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 06 15:28:24 compute-0 systemd[1]: Stopped nova_compute container.
Jan 06 15:28:24 compute-0 systemd[1]: Starting nova_compute container...
Jan 06 15:28:24 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c8906b2185567afe49b9d0232f2e3e0cf58688bf5e5d87567635730a43211/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c8906b2185567afe49b9d0232f2e3e0cf58688bf5e5d87567635730a43211/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c8906b2185567afe49b9d0232f2e3e0cf58688bf5e5d87567635730a43211/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c8906b2185567afe49b9d0232f2e3e0cf58688bf5e5d87567635730a43211/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c8906b2185567afe49b9d0232f2e3e0cf58688bf5e5d87567635730a43211/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:24 compute-0 podman[185498]: 2026-01-06 15:28:24.642569414 +0000 UTC m=+0.127656951 container init 55b8ec43af1fcd596145fa6a59e91743fafd2328d1bbf0a28ef963ea14cfff22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 06 15:28:24 compute-0 podman[185498]: 2026-01-06 15:28:24.656377438 +0000 UTC m=+0.141464915 container start 55b8ec43af1fcd596145fa6a59e91743fafd2328d1bbf0a28ef963ea14cfff22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:28:24 compute-0 podman[185498]: nova_compute
Jan 06 15:28:24 compute-0 nova_compute[185513]: + sudo -E kolla_set_configs
Jan 06 15:28:24 compute-0 systemd[1]: Started nova_compute container.
Jan 06 15:28:24 compute-0 sudo[185448]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Validating config file
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Copying service configuration files
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Deleting /etc/ceph
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Creating directory /etc/ceph
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /etc/ceph
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Writing out command to execute
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 06 15:28:24 compute-0 nova_compute[185513]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 06 15:28:24 compute-0 nova_compute[185513]: ++ cat /run_command
Jan 06 15:28:24 compute-0 nova_compute[185513]: + CMD=nova-compute
Jan 06 15:28:24 compute-0 nova_compute[185513]: + ARGS=
Jan 06 15:28:24 compute-0 nova_compute[185513]: + sudo kolla_copy_cacerts
Jan 06 15:28:24 compute-0 nova_compute[185513]: + [[ ! -n '' ]]
Jan 06 15:28:24 compute-0 nova_compute[185513]: + . kolla_extend_start
Jan 06 15:28:24 compute-0 nova_compute[185513]: Running command: 'nova-compute'
Jan 06 15:28:24 compute-0 nova_compute[185513]: + echo 'Running command: '\''nova-compute'\'''
Jan 06 15:28:24 compute-0 nova_compute[185513]: + umask 0022
Jan 06 15:28:24 compute-0 nova_compute[185513]: + exec nova-compute
Jan 06 15:28:25 compute-0 sudo[185674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtvxafcydisppyyhamnndwtsgiqrjgla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713304.9351258-1287-162325163838721/AnsiballZ_podman_container.py'
Jan 06 15:28:25 compute-0 sudo[185674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:25 compute-0 python3.9[185676]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 06 15:28:25 compute-0 systemd[1]: Started libpod-conmon-2351265932d7792819f5b179d26e17c2ce46acad3a42f7cb724d51c8f28538d4.scope.
Jan 06 15:28:25 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d6895e476b71134042962837ef4ecd3cc0dfd4ce3630fd142b9fdf3521eb97/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d6895e476b71134042962837ef4ecd3cc0dfd4ce3630fd142b9fdf3521eb97/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d6895e476b71134042962837ef4ecd3cc0dfd4ce3630fd142b9fdf3521eb97/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 06 15:28:25 compute-0 podman[185702]: 2026-01-06 15:28:25.849532079 +0000 UTC m=+0.160982549 container init 2351265932d7792819f5b179d26e17c2ce46acad3a42f7cb724d51c8f28538d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 15:28:25 compute-0 podman[185702]: 2026-01-06 15:28:25.863950538 +0000 UTC m=+0.175400958 container start 2351265932d7792819f5b179d26e17c2ce46acad3a42f7cb724d51c8f28538d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=nova_compute_init, tcib_managed=true)
Jan 06 15:28:25 compute-0 python3.9[185676]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Applying nova statedir ownership
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 06 15:28:25 compute-0 nova_compute_init[185724]: INFO:nova_statedir:Nova statedir ownership complete
Jan 06 15:28:25 compute-0 systemd[1]: libpod-2351265932d7792819f5b179d26e17c2ce46acad3a42f7cb724d51c8f28538d4.scope: Deactivated successfully.
Jan 06 15:28:25 compute-0 podman[185743]: 2026-01-06 15:28:25.991858966 +0000 UTC m=+0.026232952 container died 2351265932d7792819f5b179d26e17c2ce46acad3a42f7cb724d51c8f28538d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2351265932d7792819f5b179d26e17c2ce46acad3a42f7cb724d51c8f28538d4-userdata-shm.mount: Deactivated successfully.
Jan 06 15:28:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-67d6895e476b71134042962837ef4ecd3cc0dfd4ce3630fd142b9fdf3521eb97-merged.mount: Deactivated successfully.
Jan 06 15:28:26 compute-0 sudo[185674]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:26 compute-0 podman[185743]: 2026-01-06 15:28:26.030947485 +0000 UTC m=+0.065321471 container cleanup 2351265932d7792819f5b179d26e17c2ce46acad3a42f7cb724d51c8f28538d4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:28:26 compute-0 systemd[1]: libpod-conmon-2351265932d7792819f5b179d26e17c2ce46acad3a42f7cb724d51c8f28538d4.scope: Deactivated successfully.
Jan 06 15:28:26 compute-0 sshd-session[162459]: Connection closed by 192.168.122.30 port 40688
Jan 06 15:28:26 compute-0 sshd-session[162456]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:28:26 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 06 15:28:26 compute-0 systemd[1]: session-24.scope: Consumed 1min 45.322s CPU time.
Jan 06 15:28:26 compute-0 systemd-logind[791]: Session 24 logged out. Waiting for processes to exit.
Jan 06 15:28:26 compute-0 systemd-logind[791]: Removed session 24.
Jan 06 15:28:26 compute-0 nova_compute[185513]: 2026-01-06 15:28:26.847 185517 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 06 15:28:26 compute-0 nova_compute[185513]: 2026-01-06 15:28:26.848 185517 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 06 15:28:26 compute-0 nova_compute[185513]: 2026-01-06 15:28:26.849 185517 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 06 15:28:26 compute-0 nova_compute[185513]: 2026-01-06 15:28:26.850 185517 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 06 15:28:26 compute-0 nova_compute[185513]: 2026-01-06 15:28:26.989 185517 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.012 185517 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.012 185517 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.459 185517 INFO nova.virt.driver [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.595 185517 INFO nova.compute.provider_config [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.904 185517 DEBUG oslo_concurrency.lockutils [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.904 185517 DEBUG oslo_concurrency.lockutils [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.904 185517 DEBUG oslo_concurrency.lockutils [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.905 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.905 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.906 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.906 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.906 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.906 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
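[annotation] Everything after this separator is oslo.config echoing the effective option values ("Full set of CONF") assembled from the command line and the two files listed above. A quick way to cross-check a logged value against the on-disk files, assuming they exist and are readable (stdlib configparser is used here for brevity instead of oslo.config, so only values explicitly set in the files will appear; unset options fall back to oslo.config defaults):

    import configparser

    # interpolation=None because nova.conf holds raw %(...)s format strings
    # (see the logging_* options below) that would trip the default parser.
    cfg = configparser.ConfigParser(interpolation=None)
    read = cfg.read(["/etc/nova/nova.conf", "/etc/nova/nova-compute.conf"])
    print("parsed files:", read)
    print("debug =", cfg.get("DEFAULT", "debug", fallback="<unset>"))
    print("compute_driver =", cfg.get("DEFAULT", "compute_driver", fallback="<unset>"))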
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.906 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.907 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.907 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.907 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.907 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.908 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.908 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.908 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.908 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.908 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.909 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.909 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.909 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.909 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.910 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.910 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.910 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.910 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.910 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.911 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.911 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.911 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.911 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.912 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.912 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.912 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.912 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.913 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.913 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.913 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.913 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.914 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.914 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.914 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.914 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.915 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.915 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.915 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.915 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.915 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.916 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.916 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.916 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.916 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.916 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.917 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.917 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.917 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.917 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.917 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.918 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.918 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.918 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.918 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.918 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.919 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.919 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.919 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.919 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.919 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.920 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.920 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.920 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.920 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.920 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.921 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.921 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.921 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.921 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.921 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.922 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.922 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.922 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.922 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.923 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.923 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.923 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.923 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.924 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.924 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.924 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.924 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.925 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.925 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.925 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.925 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.926 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.926 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.926 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.926 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.927 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.927 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.927 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.927 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.928 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.928 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.928 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.928 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.929 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.929 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.929 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.929 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.930 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.930 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.930 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.930 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.931 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.931 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.931 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.931 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.932 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.932 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.933 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.933 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.933 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.934 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.934 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.934 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.935 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.935 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.935 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.935 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.935 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.936 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.936 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.936 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.936 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.936 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.937 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.937 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.937 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.937 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.937 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.938 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.938 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.938 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.938 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.938 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.939 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.939 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.939 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.939 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.939 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.940 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.940 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.940 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.940 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.941 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.941 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.941 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.942 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.942 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.942 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.942 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.943 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.943 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.943 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.943 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.943 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.944 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.944 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.944 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.944 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.944 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.945 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.945 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.945 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.945 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.946 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.946 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.946 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.946 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.946 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.947 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.947 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.947 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.947 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.948 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.948 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.948 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.949 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.949 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.949 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.949 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.950 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.950 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.950 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.950 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.951 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.951 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.951 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.951 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.951 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.952 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.952 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.952 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.952 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.952 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.953 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.953 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.953 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.953 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.953 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.954 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.954 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.954 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.954 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.954 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.955 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.955 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.955 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.955 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.955 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.956 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.956 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.956 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.956 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.956 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.957 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.957 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.957 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.957 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.957 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.958 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.958 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.958 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.958 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.958 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.959 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.959 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.959 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.959 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.959 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.960 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.960 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.960 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.960 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.960 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.961 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.961 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.961 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.961 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.961 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.962 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.962 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.962 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.962 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.962 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.963 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.963 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.963 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.963 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.963 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.964 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.964 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.964 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.964 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.964 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.965 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.965 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.965 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.965 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.965 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.966 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.966 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.966 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.966 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.966 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.967 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.967 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.967 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.967 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.967 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.968 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.968 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.968 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.968 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.968 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.968 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.969 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.969 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.969 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.969 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.969 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.970 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.970 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.970 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.970 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.970 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.971 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.971 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.971 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.971 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.971 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.971 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.971 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.971 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.972 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.972 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.972 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.972 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.972 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.972 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.972 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.973 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.973 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.973 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.973 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.973 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.973 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.973 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.974 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.974 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.974 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.974 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.974 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.974 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.974 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.975 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.975 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.975 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.975 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.975 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.975 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.975 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.976 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.976 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.976 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.976 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.976 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.976 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.976 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.977 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.977 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.977 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.977 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.977 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.977 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.977 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.978 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.978 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.978 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.978 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.978 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.978 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.978 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.978 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.979 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.979 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.979 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.979 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.979 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.980 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.980 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.980 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.980 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.980 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.980 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.980 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.981 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.981 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.981 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.981 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.981 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.981 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.981 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.981 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.982 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.982 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.982 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.982 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.982 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.982 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.982 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.983 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.983 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.983 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.983 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.983 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.983 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.983 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.984 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.984 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.984 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.984 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.984 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.984 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.984 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.985 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.985 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.985 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.985 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.985 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.985 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.985 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.985 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.986 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.986 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.986 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.986 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.986 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.986 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.986 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.987 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.987 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.987 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.987 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.987 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.987 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.987 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.988 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.988 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.988 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.988 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.988 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.988 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.988 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.988 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.989 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.989 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.989 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.989 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.989 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.989 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.989 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.990 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
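The lines above are nova-compute's startup dump of its fully resolved oslo.config values (one log record per option, emitted by log_opt_values in oslo_config/cfg.py). As a hedged illustration only, the same kind of dump for the [vault] group can be reproduced with a small oslo.config sketch; the /etc/nova/nova.conf path and the reduced option set below are assumptions, not taken from this log.

    from oslo_config import cfg

    # Minimal stand-ins for a few of the [vault] options shown above; the real
    # definitions live in Castellan/Nova, this is only an illustration.
    vault_opts = [
        cfg.StrOpt('vault_url', default='http://127.0.0.1:8200'),
        cfg.StrOpt('kv_mountpoint', default='secret'),
        cfg.IntOpt('kv_version', default=2),
        cfg.BoolOpt('use_ssl', default=False),
    ]

    CONF = cfg.ConfigOpts()
    CONF.register_opts(vault_opts, group='vault')

    # Assumed config path; point it at wherever the deployment keeps nova.conf.
    CONF(['--config-file', '/etc/nova/nova.conf'], project='nova')

    for opt in vault_opts:
        print('vault.%-15s = %s' % (opt.name, getattr(CONF.vault, opt.name)))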
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.990 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.990 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.990 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.990 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.990 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.991 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.991 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.991 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.991 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.991 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.991 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.991 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.991 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.992 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.992 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.992 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.992 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.992 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.992 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.993 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.993 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.993 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.993 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.993 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.993 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.994 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.994 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.994 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.994 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.994 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.994 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.994 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.995 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.995 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.995 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.995 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
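libvirt.hw_machine_type above is a list of arch=machine_type pairs (here x86_64=q35). Below is a simplified, hedged sketch of how such entries can be turned into a per-architecture mapping; it is an illustration, not Nova's actual parser.

    def parse_hw_machine_type(entries):
        # Each entry is expected to look like 'arch=machine_type'.
        mapping = {}
        for entry in entries:
            arch, sep, machine_type = entry.partition('=')
            if not sep or not machine_type:
                raise ValueError('expected arch=machine_type, got %r' % entry)
            mapping[arch.strip()] = machine_type.strip()
        return mapping

    print(parse_hw_machine_type(['x86_64=q35']))  # {'x86_64': 'q35'}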
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.995 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.995 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.996 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.996 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.996 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.996 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.996 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.996 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.996 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.997 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.997 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.997 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.997 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.997 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.997 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.998 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.998 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.998 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.998 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.998 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.998 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.998 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.999 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.999 185517 WARNING oslo_config.cfg [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 06 15:28:27 compute-0 nova_compute[185513]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 06 15:28:27 compute-0 nova_compute[185513]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Jan 06 15:28:27 compute-0 nova_compute[185513]: and ``live_migration_inbound_addr`` respectively.
Jan 06 15:28:27 compute-0 nova_compute[185513]: ).  Its value may be silently ignored in the future.
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.999 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
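The warning above recommends replacing the deprecated live_migration_uri template (here qemu+tls://%s/system) with live_migration_scheme plus live_migration_inbound_addr. The sketch below shows, in simplified and hedged form, how the effective target URI can be thought of as being resolved; it is not Nova's actual code, and the destination hostname is a made-up example.

    def live_migration_uri(dest_host, uri_template=None, scheme=None):
        # uri_template stands in for libvirt.live_migration_uri (deprecated),
        # scheme for libvirt.live_migration_scheme; dest_host would normally be
        # the destination's live_migration_inbound_addr or hostname.
        if uri_template:
            # The deprecated template still wins when it is explicitly set.
            return uri_template % dest_host
        return 'qemu+%s://%s/system' % (scheme or 'tcp', dest_host)

    print(live_migration_uri('compute-1.example.com',
                             uri_template='qemu+tls://%s/system'))
    # -> qemu+tls://compute-1.example.com/system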
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.999 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:27 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.999 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:27.999 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.000 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.000 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.000 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.000 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.000 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.001 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.001 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.001 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.001 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.001 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.002 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.002 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.002 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.002 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.002 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.002 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.002 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.003 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.003 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.003 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.003 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.003 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.003 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.003 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.004 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.004 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.004 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.004 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.004 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.004 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.005 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.005 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.005 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.005 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.005 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.005 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.005 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.006 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.006 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.006 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.006 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.006 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.006 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.007 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.007 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.007 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.007 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.007 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.008 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.008 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.008 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.008 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.008 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.009 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.009 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.009 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.009 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.009 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.009 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.010 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.010 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.010 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.010 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.010 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.010 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.011 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.011 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.011 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.011 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.011 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.011 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.011 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.012 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.012 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.012 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.012 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
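With neutron.service_metadata_proxy = True and a metadata_proxy_shared_secret configured (masked as **** above), proxied metadata requests are expected to carry an HMAC signature of the instance ID computed with that secret. The sketch below is a general HMAC-SHA256 illustration of such a check with made-up values; the exact headers and scheme used by the metadata proxy are assumptions here, not taken from this log.

    import hashlib
    import hmac

    def valid_proxy_signature(shared_secret, instance_id, received_signature):
        # Recompute HMAC-SHA256 over the instance ID with the shared secret and
        # compare it to the received signature in constant time.
        expected = hmac.new(shared_secret.encode('utf-8'),
                            instance_id.encode('utf-8'),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, received_signature)

    sig = hmac.new(b'secret', b'uuid-1234', hashlib.sha256).hexdigest()
    print(valid_proxy_signature('secret', 'uuid-1234', sig))  # True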
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.012 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.012 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.013 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.013 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.013 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.013 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.014 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.014 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.014 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.014 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.014 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.015 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.015 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.015 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.015 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.015 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.015 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.015 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.016 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.016 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.016 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.016 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.016 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.016 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.016 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.016 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.017 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.017 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.017 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.017 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.017 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.017 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.017 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.018 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.018 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.018 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.018 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.018 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.018 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.018 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.019 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.019 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.019 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.019 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.019 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.019 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.019 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.020 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.020 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.020 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.020 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.020 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.020 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.020 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.021 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.021 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.021 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.021 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.021 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.021 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.022 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.022 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.022 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.022 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.022 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.022 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.022 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.023 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.023 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.023 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.023 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.023 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.023 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.023 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.024 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.024 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.024 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.024 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.024 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.024 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.024 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.025 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.025 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.025 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.025 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.025 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.025 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.025 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.026 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.026 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.026 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.026 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.026 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.026 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.026 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.027 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.027 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.027 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.027 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.027 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.027 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.028 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.028 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.028 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.028 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.028 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.028 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.028 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.029 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.029 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.029 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.029 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.029 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.029 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.029 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.030 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.030 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.030 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.030 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.030 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.030 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.031 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.031 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.031 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.031 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.031 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.031 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.031 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.032 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.032 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.032 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.032 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.032 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.032 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.032 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.033 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.033 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.033 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.033 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.033 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.033 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.033 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.034 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.034 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.034 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.034 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.034 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.034 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.034 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.035 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.035 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.035 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.035 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.035 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.035 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.035 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.035 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.036 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.036 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.036 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.036 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.036 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.036 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.036 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.037 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.037 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.037 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.038 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.038 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.038 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.038 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.038 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.038 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.039 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.039 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.039 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.039 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.039 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.039 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.039 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.040 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.040 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.040 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.040 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.040 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.040 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.040 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.041 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.041 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.041 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.041 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.041 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.041 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.041 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.042 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.042 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.042 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.042 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.042 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.042 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.042 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.043 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.043 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.043 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.043 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.043 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.043 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.043 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.044 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.044 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.044 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.044 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.044 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.044 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.044 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.045 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.045 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.045 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.045 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.045 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.045 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.045 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.046 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.046 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.046 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.046 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.046 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.046 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.046 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.047 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.047 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.047 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.047 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.047 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.047 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.047 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.048 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.048 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.048 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.048 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.048 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.048 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.048 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.049 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.049 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.049 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.049 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.049 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.049 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.049 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.050 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.050 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.050 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.050 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.050 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.050 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.050 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.051 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.051 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.051 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.051 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.051 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.051 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.051 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.052 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.052 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.052 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.052 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.052 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.052 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.053 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.053 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.053 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.053 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.053 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.053 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.053 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.054 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.054 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.054 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.054 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.054 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.054 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.054 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.054 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.055 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.055 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.055 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.055 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.055 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.055 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.055 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.056 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.056 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.056 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.056 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.056 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.056 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.056 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.057 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.057 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.057 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.057 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.057 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.057 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.058 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.058 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.058 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.058 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.058 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.058 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.058 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.059 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.059 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.059 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.059 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.059 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.059 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.059 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.060 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.060 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.060 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.060 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.060 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.060 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.060 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.060 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.061 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.061 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.061 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.061 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.061 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.061 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.062 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.062 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.062 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.062 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.062 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.062 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.062 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.063 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.063 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.063 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.063 185517 DEBUG oslo_service.service [None req-84879634-6bad-4ee1-92b7-016191e1b1ac - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.064 185517 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.089 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.090 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.090 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.091 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.104 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f2ce6efe070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.107 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f2ce6efe070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.108 185517 INFO nova.virt.libvirt.driver [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Connection event '1' reason 'None'
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.116 185517 INFO nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Libvirt host capabilities <capabilities>
Jan 06 15:28:28 compute-0 nova_compute[185513]: 
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <host>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <uuid>f243d16a-3dea-407f-9cc3-41cda7bb8d99</uuid>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <cpu>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <arch>x86_64</arch>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model>EPYC-Rome-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <vendor>AMD</vendor>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <microcode version='16777317'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <signature family='23' model='49' stepping='0'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='x2apic'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='tsc-deadline'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='osxsave'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='hypervisor'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='tsc_adjust'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='spec-ctrl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='stibp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='arch-capabilities'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='cmp_legacy'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='topoext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='virt-ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='lbrv'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='tsc-scale'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='vmcb-clean'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='pause-filter'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='pfthreshold'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='svme-addr-chk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='rdctl-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='skip-l1dfl-vmentry'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='mds-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature name='pschange-mc-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <pages unit='KiB' size='4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <pages unit='KiB' size='2048'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <pages unit='KiB' size='1048576'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </cpu>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <power_management>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <suspend_mem/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <suspend_disk/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <suspend_hybrid/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </power_management>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <iommu support='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <migration_features>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <live/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <uri_transports>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <uri_transport>tcp</uri_transport>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <uri_transport>rdma</uri_transport>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </uri_transports>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </migration_features>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <topology>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <cells num='1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <cell id='0'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:           <memory unit='KiB'>7864312</memory>
Jan 06 15:28:28 compute-0 nova_compute[185513]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 06 15:28:28 compute-0 nova_compute[185513]:           <pages unit='KiB' size='2048'>0</pages>
Jan 06 15:28:28 compute-0 nova_compute[185513]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 06 15:28:28 compute-0 nova_compute[185513]:           <distances>
Jan 06 15:28:28 compute-0 nova_compute[185513]:             <sibling id='0' value='10'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:           </distances>
Jan 06 15:28:28 compute-0 nova_compute[185513]:           <cpus num='8'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:           </cpus>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         </cell>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </cells>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </topology>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <cache>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </cache>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <secmodel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model>selinux</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <doi>0</doi>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </secmodel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <secmodel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model>dac</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <doi>0</doi>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </secmodel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </host>
Jan 06 15:28:28 compute-0 nova_compute[185513]: 
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <guest>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <os_type>hvm</os_type>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <arch name='i686'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <wordsize>32</wordsize>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <domain type='qemu'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <domain type='kvm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </arch>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <features>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <pae/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <nonpae/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <acpi default='on' toggle='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <apic default='on' toggle='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <cpuselection/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <deviceboot/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <disksnapshot default='on' toggle='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <externalSnapshot/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </features>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </guest>
Jan 06 15:28:28 compute-0 nova_compute[185513]: 
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <guest>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <os_type>hvm</os_type>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <arch name='x86_64'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <wordsize>64</wordsize>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <domain type='qemu'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <domain type='kvm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </arch>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <features>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <acpi default='on' toggle='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <apic default='on' toggle='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <cpuselection/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <deviceboot/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <disksnapshot default='on' toggle='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <externalSnapshot/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </features>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </guest>
Jan 06 15:28:28 compute-0 nova_compute[185513]: 
Jan 06 15:28:28 compute-0 nova_compute[185513]: </capabilities>
Jan 06 15:28:28 compute-0 nova_compute[185513]: 
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.123 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.128 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 06 15:28:28 compute-0 nova_compute[185513]: <domainCapabilities>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <path>/usr/libexec/qemu-kvm</path>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <domain>kvm</domain>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <arch>i686</arch>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <vcpu max='4096'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <iothreads supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <os supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <enum name='firmware'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <loader supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>rom</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pflash</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='readonly'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>yes</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>no</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='secure'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>no</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </loader>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </os>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <cpu>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='host-passthrough' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='hostPassthroughMigratable'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>on</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>off</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='maximum' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='maximumMigratable'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>on</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>off</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='host-model' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <vendor>AMD</vendor>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='x2apic'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc-deadline'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='hypervisor'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc_adjust'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='spec-ctrl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='stibp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='cmp_legacy'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='overflow-recov'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='succor'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='amd-ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='virt-ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='lbrv'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc-scale'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='vmcb-clean'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='flushbyasid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='pause-filter'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='pfthreshold'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='svme-addr-chk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='disable' name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='custom' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Dhyana-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Genoa'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='auto-ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Genoa-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='auto-ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-128'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-256'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-512'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v6'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v7'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='KnightsMill'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512er'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512pf'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='KnightsMill-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512er'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512pf'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G4-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tbm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G5-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tbm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SierraForest'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cmpccxadd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SierraForest-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cmpccxadd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='athlon'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='athlon-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='core2duo'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='core2duo-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='coreduo'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='coreduo-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='n270'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='n270-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='phenom'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='phenom-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </cpu>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <memoryBacking supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <enum name='sourceType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>file</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>anonymous</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>memfd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </memoryBacking>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <devices>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <disk supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='diskDevice'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>disk</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>cdrom</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>floppy</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>lun</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='bus'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>fdc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>scsi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>sata</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-non-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </disk>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <graphics supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vnc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>egl-headless</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dbus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </graphics>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <video supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='modelType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vga</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>cirrus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>none</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>bochs</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>ramfb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </video>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <hostdev supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='mode'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>subsystem</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='startupPolicy'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>default</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>mandatory</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>requisite</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>optional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='subsysType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pci</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>scsi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='capsType'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='pciBackend'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </hostdev>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <rng supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-non-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>random</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>egd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>builtin</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </rng>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <filesystem supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='driverType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>path</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>handle</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtiofs</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </filesystem>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <tpm supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tpm-tis</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tpm-crb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>emulator</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>external</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendVersion'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>2.0</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </tpm>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <redirdev supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='bus'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </redirdev>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <channel supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pty</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>unix</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </channel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <crypto supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>qemu</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>builtin</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </crypto>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <interface supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>default</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>passt</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </interface>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <panic supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>isa</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>hyperv</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </panic>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <console supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>null</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pty</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dev</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>file</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pipe</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>stdio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>udp</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tcp</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>unix</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>qemu-vdagent</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dbus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </console>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </devices>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <features>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <gic supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <vmcoreinfo supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <genid supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <backingStoreInput supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <backup supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <async-teardown supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <ps2 supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <sev supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <sgx supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <hyperv supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='features'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>relaxed</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vapic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>spinlocks</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vpindex</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>runtime</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>synic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>stimer</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>reset</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vendor_id</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>frequencies</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>reenlightenment</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tlbflush</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>ipi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>avic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>emsr_bitmap</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>xmm_input</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <defaults>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <spinlocks>4095</spinlocks>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <stimer_direct>on</stimer_direct>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <tlbflush_direct>on</tlbflush_direct>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <tlbflush_extended>on</tlbflush_extended>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </defaults>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </hyperv>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <launchSecurity supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='sectype'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tdx</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </launchSecurity>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </features>
Jan 06 15:28:28 compute-0 nova_compute[185513]: </domainCapabilities>
Jan 06 15:28:28 compute-0 nova_compute[185513]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.137 185517 WARNING nova.virt.libvirt.driver [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.137 185517 DEBUG nova.virt.libvirt.volume.mount [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.138 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 06 15:28:28 compute-0 nova_compute[185513]: <domainCapabilities>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <path>/usr/libexec/qemu-kvm</path>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <domain>kvm</domain>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <arch>i686</arch>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <vcpu max='240'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <iothreads supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <os supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <enum name='firmware'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <loader supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>rom</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pflash</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='readonly'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>yes</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>no</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='secure'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>no</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </loader>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </os>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <cpu>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='host-passthrough' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='hostPassthroughMigratable'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>on</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>off</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='maximum' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='maximumMigratable'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>on</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>off</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='host-model' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <vendor>AMD</vendor>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='x2apic'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc-deadline'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='hypervisor'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc_adjust'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='spec-ctrl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='stibp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='cmp_legacy'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='overflow-recov'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='succor'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='amd-ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='virt-ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='lbrv'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc-scale'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='vmcb-clean'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='flushbyasid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='pause-filter'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='pfthreshold'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='svme-addr-chk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='disable' name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='custom' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Dhyana-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Genoa'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='auto-ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Genoa-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='auto-ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-128'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-256'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-512'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v6'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v7'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='KnightsMill'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512er'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512pf'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='KnightsMill-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512er'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512pf'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G4-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tbm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G5-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tbm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SierraForest'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cmpccxadd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SierraForest-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cmpccxadd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='athlon'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='athlon-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='core2duo'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='core2duo-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='coreduo'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='coreduo-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='n270'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='n270-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='phenom'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='phenom-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </cpu>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <memoryBacking supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <enum name='sourceType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>file</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>anonymous</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>memfd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </memoryBacking>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <devices>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <disk supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='diskDevice'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>disk</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>cdrom</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>floppy</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>lun</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='bus'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>ide</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>fdc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>scsi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>sata</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-non-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </disk>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <graphics supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vnc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>egl-headless</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dbus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </graphics>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <video supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='modelType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vga</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>cirrus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>none</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>bochs</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>ramfb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </video>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <hostdev supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='mode'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>subsystem</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='startupPolicy'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>default</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>mandatory</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>requisite</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>optional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='subsysType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pci</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>scsi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='capsType'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='pciBackend'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </hostdev>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <rng supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-non-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>random</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>egd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>builtin</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </rng>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <filesystem supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='driverType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>path</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>handle</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtiofs</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </filesystem>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <tpm supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tpm-tis</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tpm-crb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>emulator</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>external</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendVersion'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>2.0</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </tpm>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <redirdev supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='bus'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </redirdev>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <channel supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pty</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>unix</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </channel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <crypto supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>qemu</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>builtin</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </crypto>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <interface supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>default</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>passt</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </interface>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <panic supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>isa</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>hyperv</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </panic>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <console supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>null</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pty</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dev</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>file</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pipe</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>stdio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>udp</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tcp</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>unix</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>qemu-vdagent</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dbus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </console>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </devices>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <features>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <gic supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <vmcoreinfo supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <genid supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <backingStoreInput supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <backup supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <async-teardown supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <ps2 supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <sev supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <sgx supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <hyperv supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='features'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>relaxed</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vapic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>spinlocks</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vpindex</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>runtime</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>synic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>stimer</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>reset</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vendor_id</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>frequencies</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>reenlightenment</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tlbflush</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>ipi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>avic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>emsr_bitmap</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>xmm_input</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <defaults>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <spinlocks>4095</spinlocks>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <stimer_direct>on</stimer_direct>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <tlbflush_direct>on</tlbflush_direct>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <tlbflush_extended>on</tlbflush_extended>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </defaults>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </hyperv>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <launchSecurity supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='sectype'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tdx</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </launchSecurity>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </features>
Jan 06 15:28:28 compute-0 nova_compute[185513]: </domainCapabilities>
Jan 06 15:28:28 compute-0 nova_compute[185513]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.185 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.192 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 06 15:28:28 compute-0 nova_compute[185513]: <domainCapabilities>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <path>/usr/libexec/qemu-kvm</path>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <domain>kvm</domain>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <arch>x86_64</arch>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <vcpu max='4096'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <iothreads supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <os supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <enum name='firmware'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>efi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <loader supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>rom</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pflash</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='readonly'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>yes</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>no</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='secure'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>yes</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>no</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </loader>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </os>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <cpu>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='host-passthrough' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='hostPassthroughMigratable'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>on</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>off</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='maximum' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='maximumMigratable'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>on</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>off</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='host-model' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <vendor>AMD</vendor>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='x2apic'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc-deadline'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='hypervisor'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc_adjust'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='spec-ctrl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='stibp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='cmp_legacy'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='overflow-recov'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='succor'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='amd-ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='virt-ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='lbrv'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc-scale'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='vmcb-clean'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='flushbyasid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='pause-filter'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='pfthreshold'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='svme-addr-chk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='disable' name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='custom' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Dhyana-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Genoa'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='auto-ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Genoa-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='auto-ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-128'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-256'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-512'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v6'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v7'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='KnightsMill'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512er'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512pf'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='KnightsMill-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512er'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512pf'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G4-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tbm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G5-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tbm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SierraForest'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cmpccxadd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SierraForest-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cmpccxadd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='athlon'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='athlon-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='core2duo'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='core2duo-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='coreduo'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='coreduo-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='n270'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='n270-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='phenom'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='phenom-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </cpu>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <memoryBacking supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <enum name='sourceType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>file</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>anonymous</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>memfd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </memoryBacking>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <devices>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <disk supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='diskDevice'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>disk</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>cdrom</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>floppy</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>lun</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='bus'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>fdc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>scsi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>sata</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-non-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </disk>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <graphics supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vnc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>egl-headless</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dbus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </graphics>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <video supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='modelType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vga</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>cirrus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>none</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>bochs</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>ramfb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </video>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <hostdev supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='mode'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>subsystem</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='startupPolicy'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>default</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>mandatory</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>requisite</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>optional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='subsysType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pci</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>scsi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='capsType'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='pciBackend'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </hostdev>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <rng supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-non-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>random</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>egd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>builtin</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </rng>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <filesystem supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='driverType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>path</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>handle</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtiofs</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </filesystem>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <tpm supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tpm-tis</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tpm-crb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>emulator</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>external</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendVersion'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>2.0</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </tpm>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <redirdev supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='bus'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </redirdev>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <channel supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pty</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>unix</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </channel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <crypto supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>qemu</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>builtin</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </crypto>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <interface supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>default</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>passt</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </interface>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <panic supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>isa</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>hyperv</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </panic>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <console supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>null</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pty</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dev</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>file</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pipe</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>stdio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>udp</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tcp</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>unix</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>qemu-vdagent</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dbus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </console>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </devices>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <features>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <gic supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <vmcoreinfo supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <genid supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <backingStoreInput supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <backup supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <async-teardown supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <ps2 supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <sev supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <sgx supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <hyperv supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='features'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>relaxed</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vapic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>spinlocks</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vpindex</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>runtime</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>synic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>stimer</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>reset</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vendor_id</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>frequencies</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>reenlightenment</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tlbflush</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>ipi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>avic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>emsr_bitmap</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>xmm_input</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <defaults>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <spinlocks>4095</spinlocks>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <stimer_direct>on</stimer_direct>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <tlbflush_direct>on</tlbflush_direct>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <tlbflush_extended>on</tlbflush_extended>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </defaults>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </hyperv>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <launchSecurity supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='sectype'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tdx</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </launchSecurity>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </features>
Jan 06 15:28:28 compute-0 nova_compute[185513]: </domainCapabilities>
Jan 06 15:28:28 compute-0 nova_compute[185513]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.252 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 06 15:28:28 compute-0 nova_compute[185513]: <domainCapabilities>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <path>/usr/libexec/qemu-kvm</path>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <domain>kvm</domain>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <arch>x86_64</arch>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <vcpu max='240'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <iothreads supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <os supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <enum name='firmware'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <loader supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>rom</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pflash</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='readonly'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>yes</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>no</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='secure'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>no</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </loader>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </os>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <cpu>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='host-passthrough' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='hostPassthroughMigratable'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>on</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>off</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='maximum' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='maximumMigratable'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>on</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>off</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='host-model' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <vendor>AMD</vendor>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='x2apic'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc-deadline'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='hypervisor'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc_adjust'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='spec-ctrl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='stibp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='cmp_legacy'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='overflow-recov'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='succor'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='amd-ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='virt-ssbd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='lbrv'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='tsc-scale'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='vmcb-clean'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='flushbyasid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='pause-filter'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='pfthreshold'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='svme-addr-chk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <feature policy='disable' name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <mode name='custom' supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Broadwell-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cascadelake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Cooperlake-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Denverton-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Dhyana-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Genoa'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='auto-ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Genoa-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='auto-ibrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Milan-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amd-psfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='no-nested-data-bp'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='null-sel-clr-base'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='stibp-always-on'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-Rome-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='EPYC-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='GraniteRapids-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-128'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-256'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx10-512'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='prefetchiti'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Haswell-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-noTSX'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v6'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Icelake-Server-v7'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='IvyBridge-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='KnightsMill'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512er'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512pf'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='KnightsMill-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4fmaps'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-4vnniw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512er'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512pf'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G4-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tbm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Opteron_G5-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fma4'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tbm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xop'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SapphireRapids-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='amx-tile'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-bf16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-fp16'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512-vpopcntdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bitalg'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vbmi2'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrc'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fzrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='la57'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='taa-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='tsx-ldtrk'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xfd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SierraForest'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cmpccxadd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='SierraForest-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ifma'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-ne-convert'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx-vnni-int8'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='bus-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cmpccxadd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fbsdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='fsrs'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ibrs-all'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mcdt-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pbrsb-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='psdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='sbdr-ssdp-no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='serialize'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vaes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='vpclmulqdq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Client-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='hle'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='rtm'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Skylake-Server-v5'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512bw'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512cd'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512dq'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512f'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='avx512vl'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='invpcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pcid'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='pku'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='mpx'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v2'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v3'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='core-capability'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='split-lock-detect'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='Snowridge-v4'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='cldemote'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='erms'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='gfni'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdir64b'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='movdiri'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='xsaves'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='athlon'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='athlon-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='core2duo'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='core2duo-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='coreduo'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='coreduo-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='n270'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='n270-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='ss'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='phenom'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <blockers model='phenom-v1'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnow'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <feature name='3dnowext'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </blockers>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </mode>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </cpu>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <memoryBacking supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <enum name='sourceType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>file</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>anonymous</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <value>memfd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </memoryBacking>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <devices>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <disk supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='diskDevice'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>disk</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>cdrom</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>floppy</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>lun</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='bus'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>ide</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>fdc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>scsi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>sata</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-non-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </disk>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <graphics supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vnc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>egl-headless</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dbus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </graphics>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <video supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='modelType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vga</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>cirrus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>none</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>bochs</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>ramfb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </video>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <hostdev supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='mode'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>subsystem</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='startupPolicy'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>default</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>mandatory</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>requisite</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>optional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='subsysType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pci</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>scsi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='capsType'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='pciBackend'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </hostdev>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <rng supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtio-non-transitional</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>random</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>egd</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>builtin</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </rng>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <filesystem supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='driverType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>path</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>handle</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>virtiofs</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </filesystem>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <tpm supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tpm-tis</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tpm-crb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>emulator</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>external</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendVersion'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>2.0</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </tpm>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <redirdev supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='bus'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>usb</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </redirdev>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <channel supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pty</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>unix</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </channel>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <crypto supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>qemu</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendModel'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>builtin</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </crypto>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <interface supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='backendType'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>default</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>passt</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </interface>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <panic supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='model'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>isa</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>hyperv</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </panic>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <console supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='type'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>null</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vc</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pty</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dev</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>file</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>pipe</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>stdio</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>udp</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tcp</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>unix</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>qemu-vdagent</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>dbus</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </console>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </devices>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   <features>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <gic supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <vmcoreinfo supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <genid supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <backingStoreInput supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <backup supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <async-teardown supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <ps2 supported='yes'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <sev supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <sgx supported='no'/>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <hyperv supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='features'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>relaxed</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vapic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>spinlocks</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vpindex</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>runtime</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>synic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>stimer</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>reset</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>vendor_id</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>frequencies</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>reenlightenment</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tlbflush</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>ipi</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>avic</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>emsr_bitmap</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>xmm_input</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <defaults>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <spinlocks>4095</spinlocks>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <stimer_direct>on</stimer_direct>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <tlbflush_direct>on</tlbflush_direct>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <tlbflush_extended>on</tlbflush_extended>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </defaults>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </hyperv>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     <launchSecurity supported='yes'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       <enum name='sectype'>
Jan 06 15:28:28 compute-0 nova_compute[185513]:         <value>tdx</value>
Jan 06 15:28:28 compute-0 nova_compute[185513]:       </enum>
Jan 06 15:28:28 compute-0 nova_compute[185513]:     </launchSecurity>
Jan 06 15:28:28 compute-0 nova_compute[185513]:   </features>
Jan 06 15:28:28 compute-0 nova_compute[185513]: </domainCapabilities>
Jan 06 15:28:28 compute-0 nova_compute[185513]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.313 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.314 185517 INFO nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Secure Boot support detected
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.316 185517 INFO nova.virt.libvirt.driver [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.325 185517 DEBUG nova.virt.libvirt.driver [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.347 185517 INFO nova.virt.node [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Determined node identity 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from /var/lib/nova/compute_id
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.365 185517 WARNING nova.compute.manager [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Compute nodes ['6e7a5a7f-91c3-4b82-b43d-f32569e61608'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.404 185517 INFO nova.compute.manager [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.443 185517 WARNING nova.compute.manager [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.444 185517 DEBUG oslo_concurrency.lockutils [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.444 185517 DEBUG oslo_concurrency.lockutils [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.445 185517 DEBUG oslo_concurrency.lockutils [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.445 185517 DEBUG nova.compute.resource_tracker [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:28:28 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 06 15:28:28 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.763 185517 WARNING nova.virt.libvirt.driver [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.765 185517 DEBUG nova.compute.resource_tracker [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6054MB free_disk=72.65301132202148GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.765 185517 DEBUG oslo_concurrency.lockutils [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.765 185517 DEBUG oslo_concurrency.lockutils [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.793 185517 WARNING nova.compute.resource_tracker [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] No compute node record for compute-0.ctlplane.example.com:6e7a5a7f-91c3-4b82-b43d-f32569e61608: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 6e7a5a7f-91c3-4b82-b43d-f32569e61608 could not be found.
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.818 185517 INFO nova.compute.resource_tracker [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 6e7a5a7f-91c3-4b82-b43d-f32569e61608
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.888 185517 DEBUG nova.compute.resource_tracker [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:28:28 compute-0 nova_compute[185513]: 2026-01-06 15:28:28.888 185517 DEBUG nova.compute.resource_tracker [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:28:29 compute-0 podman[185835]: 2026-01-06 15:28:29.79705344 +0000 UTC m=+0.068725440 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 06 15:28:29 compute-0 nova_compute[185513]: 2026-01-06 15:28:29.895 185517 INFO nova.scheduler.client.report [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] [req-aa59657f-8dd7-41ab-8b35-2c25e1e397b8] Created resource provider record via placement API for resource provider with UUID 6e7a5a7f-91c3-4b82-b43d-f32569e61608 and name compute-0.ctlplane.example.com.
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.275 185517 DEBUG nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 06 15:28:30 compute-0 nova_compute[185513]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.275 185517 INFO nova.virt.libvirt.host [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] kernel doesn't support AMD SEV
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.276 185517 DEBUG nova.compute.provider_tree [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.276 185517 DEBUG nova.virt.libvirt.driver [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.331 185517 DEBUG nova.scheduler.client.report [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Updated inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.331 185517 DEBUG nova.compute.provider_tree [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Updating resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.331 185517 DEBUG nova.compute.provider_tree [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.464 185517 DEBUG nova.compute.provider_tree [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Updating resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.502 185517 DEBUG nova.compute.resource_tracker [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.503 185517 DEBUG oslo_concurrency.lockutils [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.504 185517 DEBUG nova.service [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.644 185517 DEBUG nova.service [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 06 15:28:30 compute-0 nova_compute[185513]: 2026-01-06 15:28:30.644 185517 DEBUG nova.servicegroup.drivers.db [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 06 15:28:32 compute-0 sshd-session[185854]: Accepted publickey for zuul from 192.168.122.30 port 36476 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:28:32 compute-0 systemd-logind[791]: New session 26 of user zuul.
Jan 06 15:28:32 compute-0 systemd[1]: Started Session 26 of User zuul.
Jan 06 15:28:32 compute-0 sshd-session[185854]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:28:33 compute-0 python3.9[186007]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:28:34 compute-0 sudo[186161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkdhelgikoibfmxffermdcppibjpoiva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713313.904978-31-46929090455333/AnsiballZ_systemd_service.py'
Jan 06 15:28:34 compute-0 sudo[186161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:34 compute-0 python3.9[186163]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:28:34 compute-0 systemd[1]: Reloading.
Jan 06 15:28:34 compute-0 systemd-sysv-generator[186191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:28:35 compute-0 systemd-rc-local-generator[186185]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:28:35 compute-0 sudo[186161]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:36 compute-0 python3.9[186348]: ansible-ansible.builtin.service_facts Invoked
Jan 06 15:28:36 compute-0 network[186365]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 06 15:28:36 compute-0 network[186366]: 'network-scripts' will be removed from distribution in near future.
Jan 06 15:28:36 compute-0 network[186367]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 06 15:28:41 compute-0 sudo[186637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvxceemzcvvvbjzbjloldyiackblbajn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713321.116824-50-276552696381096/AnsiballZ_systemd_service.py'
Jan 06 15:28:41 compute-0 sudo[186637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:41 compute-0 python3.9[186639]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:28:41 compute-0 sudo[186637]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:42 compute-0 sudo[186790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfpaojxhlqrfrzxbcepaaaengiibdown ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713322.1280441-60-137649309839868/AnsiballZ_file.py'
Jan 06 15:28:42 compute-0 sudo[186790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:42 compute-0 python3.9[186792]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:28:42 compute-0 sudo[186790]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:42 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 06 15:28:43 compute-0 sudo[186943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrprufheskrasfpdnqninlwsfxgmqbpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713323.0097876-68-144868869674731/AnsiballZ_file.py'
Jan 06 15:28:43 compute-0 sudo[186943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:43 compute-0 python3.9[186945]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:28:43 compute-0 sudo[186943]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:44 compute-0 sudo[187095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjymadsszsjjhndvwqgfbysanvgzxgja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713323.8193243-77-24398012709076/AnsiballZ_command.py'
Jan 06 15:28:44 compute-0 sudo[187095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:44 compute-0 python3.9[187097]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:28:44 compute-0 sudo[187095]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:45 compute-0 python3.9[187249]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 06 15:28:45 compute-0 sudo[187399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwzmhygvbgcqiiutzudzvtujnbjuttxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713325.58179-95-212069250419922/AnsiballZ_systemd_service.py'
Jan 06 15:28:45 compute-0 sudo[187399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:46 compute-0 python3.9[187401]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:28:46 compute-0 systemd[1]: Reloading.
Jan 06 15:28:46 compute-0 systemd-sysv-generator[187433]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:28:46 compute-0 systemd-rc-local-generator[187428]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:28:46 compute-0 sudo[187399]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:46 compute-0 sudo[187587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlmqbozdfplmliutsggmjrmtzqrtoayv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713326.7054222-103-30964885015513/AnsiballZ_command.py'
Jan 06 15:28:46 compute-0 sudo[187587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:47 compute-0 python3.9[187589]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:28:47 compute-0 sudo[187587]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:47 compute-0 sudo[187740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvlsocjydyyckarxlezzhfqpvwnafgnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713327.463865-112-230530686069229/AnsiballZ_file.py'
Jan 06 15:28:47 compute-0 sudo[187740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:47 compute-0 python3.9[187742]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:28:48 compute-0 sudo[187740]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:48 compute-0 python3.9[187892]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:28:49 compute-0 sudo[188060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmhxyxtbcceuknurxeoadfxuygbjhbuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713329.012616-128-14109984472304/AnsiballZ_group.py'
Jan 06 15:28:49 compute-0 sudo[188060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:49 compute-0 podman[188018]: 2026-01-06 15:28:49.508762138 +0000 UTC m=+0.120421742 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 06 15:28:49 compute-0 python3.9[188064]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Jan 06 15:28:49 compute-0 sudo[188060]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:50 compute-0 sudo[188224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgxlnxbgeppelfhecxfhsrthfatexljo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713329.9239066-139-195757839269945/AnsiballZ_getent.py'
Jan 06 15:28:50 compute-0 sudo[188224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:50 compute-0 python3.9[188226]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 06 15:28:50 compute-0 sudo[188224]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:50 compute-0 sudo[188377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjpkmkwlnxqoazsuhmhcsmwgvyedjfmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713330.6315718-147-280090348735128/AnsiballZ_group.py'
Jan 06 15:28:50 compute-0 sudo[188377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:51 compute-0 python3.9[188379]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 06 15:28:51 compute-0 groupadd[188380]: group added to /etc/group: name=ceilometer, GID=42405
Jan 06 15:28:51 compute-0 groupadd[188380]: group added to /etc/gshadow: name=ceilometer
Jan 06 15:28:51 compute-0 groupadd[188380]: new group: name=ceilometer, GID=42405
Jan 06 15:28:51 compute-0 sudo[188377]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:51 compute-0 sudo[188535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djfxbztiuntukqeuwabikqjvnnpfwmhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713331.329626-155-15307589244467/AnsiballZ_user.py'
Jan 06 15:28:51 compute-0 sudo[188535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:28:52 compute-0 python3.9[188537]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 06 15:28:52 compute-0 useradd[188539]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Jan 06 15:28:52 compute-0 useradd[188539]: add 'ceilometer' to group 'libvirt'
Jan 06 15:28:52 compute-0 useradd[188539]: add 'ceilometer' to shadow group 'libvirt'
Jan 06 15:28:52 compute-0 sudo[188535]: pam_unix(sudo:session): session closed for user root
Jan 06 15:28:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:28:53.665 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:28:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:28:53.666 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:28:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:28:53.666 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:28:53 compute-0 python3.9[188695]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:28:54 compute-0 python3.9[188816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767713333.3593147-181-212750708514169/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:28:55 compute-0 python3.9[188966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:28:55 compute-0 python3.9[189087]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767713334.7912295-181-18445628553583/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:28:56 compute-0 python3.9[189237]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:28:57 compute-0 python3.9[189358]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767713335.9245896-181-274035018060779/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:28:57 compute-0 python3.9[189508]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:28:58 compute-0 python3.9[189660]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:28:59 compute-0 python3.9[189812]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:28:59 compute-0 python3.9[189933]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713338.6431003-240-171427351695423/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:29:00 compute-0 podman[190057]: 2026-01-06 15:29:00.335754082 +0000 UTC m=+0.071907697 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:29:00 compute-0 python3.9[190093]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:01 compute-0 python3.9[190221]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713339.9613824-240-92006970071554/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=87dede51a10e22722618c1900db75cb764463d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:29:01 compute-0 python3.9[190371]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:02 compute-0 python3.9[190492]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713341.3484113-269-39765663673207/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:29:03 compute-0 python3.9[190642]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:03 compute-0 python3.9[190763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713342.7318265-285-198731698120813/.source.yaml _original_basename=node_exporter.yaml follow=False checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:04 compute-0 python3.9[190913]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:05 compute-0 python3.9[191034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713343.996496-300-182073240944842/.source.yaml _original_basename=podman_exporter.yaml follow=False checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:05 compute-0 python3.9[191184]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:06 compute-0 python3.9[191305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713345.2650087-315-90093133785808/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:06 compute-0 sudo[191455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlxylvfdyuizpppxmgnnbzspncouxjpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713346.527181-330-143272700205039/AnsiballZ_file.py'
Jan 06 15:29:06 compute-0 sudo[191455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:07 compute-0 python3.9[191457]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:07 compute-0 sudo[191455]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:07 compute-0 sudo[191607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyfrwexjhtehjpjwnltzjscttdbhawhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713347.4228482-338-57951039819078/AnsiballZ_file.py'
Jan 06 15:29:07 compute-0 sudo[191607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:07 compute-0 python3.9[191609]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:07 compute-0 sudo[191607]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:08 compute-0 python3.9[191759]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:29:09 compute-0 python3.9[191911]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:29:10 compute-0 python3.9[192063]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:29:10 compute-0 sudo[192215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yreengcfqshntazxkfpbbvafbgryslix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713350.5697565-370-46008756401499/AnsiballZ_file.py'
Jan 06 15:29:10 compute-0 sudo[192215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:11 compute-0 python3.9[192217]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:29:11 compute-0 sudo[192215]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:11 compute-0 sudo[192367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukrhgpasneveppdheboxtpbzlykylmsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713351.34849-378-222752099693124/AnsiballZ_systemd_service.py'
Jan 06 15:29:11 compute-0 sudo[192367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:11 compute-0 python3.9[192369]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:29:11 compute-0 systemd[1]: Reloading.
Jan 06 15:29:12 compute-0 systemd-sysv-generator[192403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:29:12 compute-0 systemd-rc-local-generator[192399]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:29:12 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 06 15:29:12 compute-0 sudo[192367]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:13 compute-0 sudo[192558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xazurkbnddscsjrtojdzbrxqtsyciacr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713352.6921535-387-470465045903/AnsiballZ_stat.py'
Jan 06 15:29:13 compute-0 sudo[192558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:13 compute-0 python3.9[192560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:13 compute-0 sudo[192558]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:13 compute-0 sudo[192681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmvmvzahwwjijoifitijivtqeknhxjkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713352.6921535-387-470465045903/AnsiballZ_copy.py'
Jan 06 15:29:13 compute-0 sudo[192681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:14 compute-0 python3.9[192683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713352.6921535-387-470465045903/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:29:14 compute-0 sudo[192681]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:14 compute-0 sudo[192757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovaqnxfedkndplqfltgtjjwjnsjmfuyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713352.6921535-387-470465045903/AnsiballZ_stat.py'
Jan 06 15:29:14 compute-0 sudo[192757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:14 compute-0 python3.9[192759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:14 compute-0 sudo[192757]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:15 compute-0 sudo[192880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nawdtockskpzlmslyihkjctsqnumxgev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713352.6921535-387-470465045903/AnsiballZ_copy.py'
Jan 06 15:29:15 compute-0 sudo[192880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:15 compute-0 python3.9[192882]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713352.6921535-387-470465045903/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:29:15 compute-0 sudo[192880]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:16 compute-0 sudo[193032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piycfpdrdrtrgkconxifjychzjrujnpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713356.3785882-419-10361001924702/AnsiballZ_file.py'
Jan 06 15:29:16 compute-0 sudo[193032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:16 compute-0 python3.9[193034]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:17 compute-0 sudo[193032]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:17 compute-0 sudo[193184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvaffflfsyvtbdcvhzysdcanzuxxqgfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713357.2047663-427-246666878556937/AnsiballZ_file.py'
Jan 06 15:29:17 compute-0 sudo[193184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:17 compute-0 python3.9[193186]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:29:17 compute-0 sudo[193184]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:18 compute-0 sudo[193336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtcxeoupbnemxysfjsatzuvcumwoktwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713357.9911022-435-12001829964499/AnsiballZ_stat.py'
Jan 06 15:29:18 compute-0 sudo[193336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:18 compute-0 python3.9[193338]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:18 compute-0 sudo[193336]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:18 compute-0 sudo[193459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeuypabocffysiscbfmodxpwmujqscrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713357.9911022-435-12001829964499/AnsiballZ_copy.py'
Jan 06 15:29:18 compute-0 sudo[193459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:19 compute-0 python3.9[193461]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713357.9911022-435-12001829964499/.source.json _original_basename=.ys2r83fw follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:19 compute-0 sudo[193459]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:19 compute-0 podman[193585]: 2026-01-06 15:29:19.962743933 +0000 UTC m=+0.214978491 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 06 15:29:19 compute-0 python3.9[193622]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:20 compute-0 nova_compute[185513]: 2026-01-06 15:29:20.647 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:29:20 compute-0 nova_compute[185513]: 2026-01-06 15:29:20.676 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:29:22 compute-0 sudo[194057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndjdkvrutnljcfjildxtqmwjffioeirh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713361.775099-475-29725043390940/AnsiballZ_container_config_data.py'
Jan 06 15:29:22 compute-0 sudo[194057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:22 compute-0 python3.9[194059]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Jan 06 15:29:22 compute-0 sudo[194057]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:23 compute-0 sudo[194209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iilyczwpaikarphpiptoldzvloljlxiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713362.9128222-486-190626283445644/AnsiballZ_container_config_hash.py'
Jan 06 15:29:23 compute-0 sudo[194209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:23 compute-0 python3.9[194211]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 06 15:29:23 compute-0 sudo[194209]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:24 compute-0 sudo[194361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjwmwyuvucrtdcpsawqdhuhekfruwqzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713363.8244002-495-78025711055871/AnsiballZ_podman_container_info.py'
Jan 06 15:29:24 compute-0 sudo[194361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:24 compute-0 python3.9[194363]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 06 15:29:24 compute-0 sudo[194361]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:25 compute-0 sudo[194539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skphwtwpvavevokzwdnxhkvztsmhabnq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713365.2516925-508-165938713065112/AnsiballZ_edpm_container_manage.py'
Jan 06 15:29:25 compute-0 sudo[194539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:26 compute-0 python3[194541]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Jan 06 15:29:26 compute-0 podman[194579]: 2026-01-06 15:29:26.40366863 +0000 UTC m=+0.069195382 container create 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS)
Jan 06 15:29:26 compute-0 podman[194579]: 2026-01-06 15:29:26.361034047 +0000 UTC m=+0.026560849 image pull 6e61bfccaf21ee9962f8af7b3bc33737123ae362fb340f43cd517263f3ab794c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 06 15:29:26 compute-0 python3[194541]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Jan 06 15:29:26 compute-0 sudo[194539]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.026 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.026 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.027 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:29:27 compute-0 sudo[194767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjctmqqifsfrssobafotgkxicfwkakws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713366.730114-516-96369777325647/AnsiballZ_stat.py'
Jan 06 15:29:27 compute-0 sudo[194767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:27 compute-0 python3.9[194769]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:29:27 compute-0 sudo[194767]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.288 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.289 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.289 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.289 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.289 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.290 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.290 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.290 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.290 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.317 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.317 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.317 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.318 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.460 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.462 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6004MB free_disk=72.65343856811523GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.462 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.463 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.582 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.582 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.621 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.651 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.654 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:29:27 compute-0 nova_compute[185513]: 2026-01-06 15:29:27.655 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:29:27 compute-0 sudo[194921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoxkuukciskpybweaortlkqmwrcxvein ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713367.6196067-525-247180462904463/AnsiballZ_file.py'
Jan 06 15:29:27 compute-0 sudo[194921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:28 compute-0 python3.9[194923]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:28 compute-0 sudo[194921]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:28 compute-0 sudo[194997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzkxzevamfcebuajyuvayngwrhrylltg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713367.6196067-525-247180462904463/AnsiballZ_stat.py'
Jan 06 15:29:28 compute-0 sudo[194997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:28 compute-0 python3.9[194999]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:29:28 compute-0 sudo[194997]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:29 compute-0 sudo[195148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtimqlkczakdzexbrewtwsydospdygfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713368.6224792-525-178657040833478/AnsiballZ_copy.py'
Jan 06 15:29:29 compute-0 sudo[195148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:29 compute-0 python3.9[195150]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767713368.6224792-525-178657040833478/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:29 compute-0 sudo[195148]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:29 compute-0 sudo[195224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clamcjrliyvhmxfxqljnjaqzepagxuzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713368.6224792-525-178657040833478/AnsiballZ_systemd.py'
Jan 06 15:29:29 compute-0 sudo[195224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:30 compute-0 python3.9[195226]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:29:30 compute-0 systemd[1]: Reloading.
Jan 06 15:29:30 compute-0 systemd-sysv-generator[195259]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:29:30 compute-0 systemd-rc-local-generator[195255]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:29:30 compute-0 sudo[195224]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:30 compute-0 podman[195263]: 2026-01-06 15:29:30.613306813 +0000 UTC m=+0.090586675 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 06 15:29:30 compute-0 sudo[195356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyixgnmkvmydlynpyiarufbibfbxvgeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713368.6224792-525-178657040833478/AnsiballZ_systemd.py'
Jan 06 15:29:30 compute-0 sudo[195356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:31 compute-0 python3.9[195358]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:29:31 compute-0 systemd[1]: Reloading.
Jan 06 15:29:31 compute-0 systemd-rc-local-generator[195385]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:29:31 compute-0 systemd-sysv-generator[195389]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:29:31 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Jan 06 15:29:31 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be1ae79cf0c1ac290c6fd17b81236c543b335fdebb1a94dd77d231e941785ab/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 06 15:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be1ae79cf0c1ac290c6fd17b81236c543b335fdebb1a94dd77d231e941785ab/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 06 15:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be1ae79cf0c1ac290c6fd17b81236c543b335fdebb1a94dd77d231e941785ab/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 06 15:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be1ae79cf0c1ac290c6fd17b81236c543b335fdebb1a94dd77d231e941785ab/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 06 15:29:31 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32.
Jan 06 15:29:31 compute-0 podman[195397]: 2026-01-06 15:29:31.677402108 +0000 UTC m=+0.172738538 container init 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: + sudo -E kolla_set_configs
Jan 06 15:29:31 compute-0 podman[195397]: 2026-01-06 15:29:31.713479014 +0000 UTC m=+0.208815394 container start 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 06 15:29:31 compute-0 podman[195397]: ceilometer_agent_compute
Jan 06 15:29:31 compute-0 sudo[195419]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: sudo: unable to send audit message: Operation not permitted
Jan 06 15:29:31 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Jan 06 15:29:31 compute-0 sudo[195419]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 06 15:29:31 compute-0 sudo[195356]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:31 compute-0 podman[195420]: 2026-01-06 15:29:31.771877916 +0000 UTC m=+0.048479195 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 06 15:29:31 compute-0 systemd[1]: 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32-3e817466b7154b4f.service: Main process exited, code=exited, status=1/FAILURE
Jan 06 15:29:31 compute-0 systemd[1]: 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32-3e817466b7154b4f.service: Failed with result 'exit-code'.
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Validating config file
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Copying service configuration files
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: INFO:__main__:Writing out command to execute
Jan 06 15:29:31 compute-0 sudo[195419]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: ++ cat /run_command
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: + ARGS=
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: + sudo kolla_copy_cacerts
Jan 06 15:29:31 compute-0 sudo[195465]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: sudo: unable to send audit message: Operation not permitted
Jan 06 15:29:31 compute-0 sudo[195465]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 06 15:29:31 compute-0 sudo[195465]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: + [[ ! -n '' ]]
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: + . kolla_extend_start
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: + umask 0022
Jan 06 15:29:31 compute-0 ceilometer_agent_compute[195413]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.640 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.641 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.641 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.641 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.641 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.641 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.641 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.641 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.641 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.641 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.641 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.642 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.642 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.642 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.642 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.642 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.642 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.642 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.642 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.642 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.642 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.643 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.643 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.643 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.643 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.643 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.643 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.643 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.643 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.643 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.643 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.643 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.644 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.645 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.646 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.647 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.647 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.647 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.647 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.647 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.647 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.647 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.647 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.647 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.647 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.647 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.648 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.648 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.648 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.648 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.648 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.648 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.648 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.648 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.649 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.650 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.651 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.652 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.653 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.674 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.675 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.675 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.675 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.675 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.675 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.675 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.675 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.675 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.675 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.676 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.677 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.678 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.679 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.679 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.679 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.679 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.679 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.679 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.679 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.679 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.679 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.679 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.679 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.680 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.681 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.682 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.683 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.684 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.685 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.687 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.689 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.689 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Jan 06 15:29:32 compute-0 python3.9[195594]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.903 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.912 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.913 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 06 15:29:32 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:32.913 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.029 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.029 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.029 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.029 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.029 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.029 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.029 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.029 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.029 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.030 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.030 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.030 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.030 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.030 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.030 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.030 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.030 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.030 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.031 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.032 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.033 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.034 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.035 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.035 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.035 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.035 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.035 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.035 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.035 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.035 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.035 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.035 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.035 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.036 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.036 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.036 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.036 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.036 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.036 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.036 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.036 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.036 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.036 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.036 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.037 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.038 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.039 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.040 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.041 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.042 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.042 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.042 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.042 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.042 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.042 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.042 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.042 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.042 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.044 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.064 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.064 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.068 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:29:33.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:29:33 compute-0 sudo[195757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baodmgjmqtfcddtewvcecnecbgiykrqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713373.0842388-566-211623994456483/AnsiballZ_stat.py'
Jan 06 15:29:33 compute-0 sudo[195757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:33 compute-0 python3.9[195759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:33 compute-0 sudo[195757]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:34 compute-0 sudo[195882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcvasdruznfxcurqntrbhghuydwbysoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713373.0842388-566-211623994456483/AnsiballZ_copy.py'
Jan 06 15:29:34 compute-0 sudo[195882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:34 compute-0 python3.9[195884]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713373.0842388-566-211623994456483/.source.yaml _original_basename=.zsm9jt23 follow=False checksum=70d81709102df5344af5f8fd1d661b4e977003a3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:34 compute-0 sudo[195882]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:34 compute-0 sudo[196034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zemtzeaqkdgkjxhgfzgsxufuxanprlxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713374.5567138-581-218584647511278/AnsiballZ_stat.py'
Jan 06 15:29:34 compute-0 sudo[196034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:35 compute-0 python3.9[196036]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:35 compute-0 sudo[196034]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:35 compute-0 sudo[196157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtwcjjegqtmfinvysxhuxxvgyboubwgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713374.5567138-581-218584647511278/AnsiballZ_copy.py'
Jan 06 15:29:35 compute-0 sudo[196157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:35 compute-0 python3.9[196159]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713374.5567138-581-218584647511278/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:29:35 compute-0 sudo[196157]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:36 compute-0 sudo[196309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-minpfjvdumneiiuijbgqggpahzncqeli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713376.53347-602-274764517178469/AnsiballZ_file.py'
Jan 06 15:29:36 compute-0 sudo[196309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:37 compute-0 python3.9[196311]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:37 compute-0 sudo[196309]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:37 compute-0 sudo[196461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhlugkzcsojxyothiblahkgkgvdwretj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713377.3240554-610-157130473696850/AnsiballZ_file.py'
Jan 06 15:29:37 compute-0 sudo[196461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:37 compute-0 python3.9[196463]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:29:37 compute-0 sudo[196461]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:38 compute-0 sudo[196613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpmeoqmivkkpzetwrvzwolzcxxizejuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713378.0987687-618-196481000149518/AnsiballZ_stat.py'
Jan 06 15:29:38 compute-0 sudo[196613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:38 compute-0 python3.9[196615]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:38 compute-0 sudo[196613]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:38 compute-0 sudo[196691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xanoyytnxjqjsnbkpnxnpkkvydtcxwpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713378.0987687-618-196481000149518/AnsiballZ_file.py'
Jan 06 15:29:38 compute-0 sudo[196691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:39 compute-0 python3.9[196693]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.kh96qqra recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:39 compute-0 sudo[196691]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:39 compute-0 python3.9[196843]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:41 compute-0 sudo[197264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccnchalsdiqgjfudefqamtnjtcrvqxwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713381.5899477-655-157668377253646/AnsiballZ_container_config_data.py'
Jan 06 15:29:41 compute-0 sudo[197264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:42 compute-0 python3.9[197266]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Jan 06 15:29:42 compute-0 sudo[197264]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:42 compute-0 sudo[197416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvjyibhhvmqjrojoncirnnodgapntohd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713382.531997-666-212575109449547/AnsiballZ_container_config_hash.py'
Jan 06 15:29:42 compute-0 sudo[197416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:43 compute-0 python3.9[197418]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 06 15:29:43 compute-0 sudo[197416]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:43 compute-0 sudo[197568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djdgtipgjdpaswqnycvjvcjnqkkwypsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713383.3551462-675-275683870973055/AnsiballZ_podman_container_info.py'
Jan 06 15:29:43 compute-0 sudo[197568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:43 compute-0 python3.9[197570]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 06 15:29:44 compute-0 sudo[197568]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:44 compute-0 sudo[197747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axnmothnfnsokxzeiqrocgszwysmhpkv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713384.705602-688-134827153112177/AnsiballZ_edpm_container_manage.py'
Jan 06 15:29:44 compute-0 sudo[197747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:45 compute-0 python3[197749]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 06 15:29:45 compute-0 podman[197785]: 2026-01-06 15:29:45.483348963 +0000 UTC m=+0.066983159 container create 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:29:45 compute-0 podman[197785]: 2026-01-06 15:29:45.44158211 +0000 UTC m=+0.025216356 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 06 15:29:45 compute-0 python3[197749]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Jan 06 15:29:45 compute-0 sudo[197747]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:46 compute-0 sudo[197972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwgjywgpwyncbcmnhnwxrhbalmkyxvao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713385.8472717-696-133072079066143/AnsiballZ_stat.py'
Jan 06 15:29:46 compute-0 sudo[197972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:46 compute-0 python3.9[197974]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:29:46 compute-0 sudo[197972]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:46 compute-0 sudo[198126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqkcmehkpmihjpmvqhdbewdustucrfyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713386.570325-705-111374055995317/AnsiballZ_file.py'
Jan 06 15:29:46 compute-0 sudo[198126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:47 compute-0 python3.9[198128]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:47 compute-0 sudo[198126]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:47 compute-0 sudo[198202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npamvebnwmllwxfgbkrseeueapbchmnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713386.570325-705-111374055995317/AnsiballZ_stat.py'
Jan 06 15:29:47 compute-0 sudo[198202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:47 compute-0 python3.9[198204]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:29:47 compute-0 sudo[198202]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:48 compute-0 sudo[198353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odxcvanzqjyrxyqgyiojustgseymsbyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713387.6466691-705-62540082167201/AnsiballZ_copy.py'
Jan 06 15:29:48 compute-0 sudo[198353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:48 compute-0 python3.9[198355]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767713387.6466691-705-62540082167201/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:48 compute-0 sudo[198353]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:48 compute-0 sudo[198429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xowochflzijoetzyagpvqxatrfkouwul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713387.6466691-705-62540082167201/AnsiballZ_systemd.py'
Jan 06 15:29:48 compute-0 sudo[198429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:49 compute-0 python3.9[198431]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:29:49 compute-0 systemd[1]: Reloading.
Jan 06 15:29:49 compute-0 systemd-rc-local-generator[198454]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:29:49 compute-0 systemd-sysv-generator[198459]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:29:49 compute-0 sudo[198429]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:49 compute-0 sudo[198540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrdysysorytboqkwitcuwpnrlfhycdya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713387.6466691-705-62540082167201/AnsiballZ_systemd.py'
Jan 06 15:29:49 compute-0 sudo[198540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:50 compute-0 python3.9[198542]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:29:50 compute-0 systemd[1]: Reloading.
Jan 06 15:29:50 compute-0 podman[198544]: 2026-01-06 15:29:50.29058584 +0000 UTC m=+0.121935960 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller)
Jan 06 15:29:50 compute-0 systemd-rc-local-generator[198596]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:29:50 compute-0 systemd-sysv-generator[198602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:29:50 compute-0 systemd[1]: Starting node_exporter container...
Jan 06 15:29:50 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d19ef6e90da388444c7469560b8994154fa0516d70ee0274962386113f94b9c/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 06 15:29:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d19ef6e90da388444c7469560b8994154fa0516d70ee0274962386113f94b9c/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 06 15:29:50 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e.
Jan 06 15:29:50 compute-0 podman[198608]: 2026-01-06 15:29:50.753489784 +0000 UTC m=+0.168703804 container init 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.783Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.784Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.784Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.784Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.785Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.785Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.785Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.785Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=arp
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=bcache
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=bonding
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=cpu
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=edac
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=filefd
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=netclass
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=netdev
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=netstat
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=nfs
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=nvme
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=softnet
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=systemd
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.786Z caller=node_exporter.go:117 level=info collector=xfs
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.787Z caller=node_exporter.go:117 level=info collector=zfs
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.788Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Jan 06 15:29:50 compute-0 node_exporter[198623]: ts=2026-01-06T15:29:50.789Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Jan 06 15:29:50 compute-0 podman[198608]: 2026-01-06 15:29:50.797069997 +0000 UTC m=+0.212283977 container start 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 15:29:50 compute-0 podman[198608]: node_exporter
Jan 06 15:29:50 compute-0 systemd[1]: Started node_exporter container.
Jan 06 15:29:50 compute-0 sudo[198540]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:50 compute-0 podman[198632]: 2026-01-06 15:29:50.879115546 +0000 UTC m=+0.072793652 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:29:51 compute-0 python3.9[198805]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 06 15:29:52 compute-0 sudo[198955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnsptndjosvdycdpajdoftegurrydzzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713392.0002193-746-116631508470272/AnsiballZ_stat.py'
Jan 06 15:29:52 compute-0 sudo[198955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:52 compute-0 python3.9[198957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:52 compute-0 sudo[198955]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:53 compute-0 sudo[199080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dihybzkcoqsxlswnqsnlqmhbalfiaxei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713392.0002193-746-116631508470272/AnsiballZ_copy.py'
Jan 06 15:29:53 compute-0 sudo[199080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:53 compute-0 python3.9[199082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713392.0002193-746-116631508470272/.source.yaml _original_basename=.smdqi8m2 follow=False checksum=ab6b227c90299e3adb6cb736754e0d6d0366d78f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:53 compute-0 sudo[199080]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:29:53.667 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:29:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:29:53.668 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:29:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:29:53.668 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:29:53 compute-0 sudo[199232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwfwiquwqwkfcsyulxglhfsqzkgpytxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713393.5490537-761-240174616013206/AnsiballZ_stat.py'
Jan 06 15:29:53 compute-0 sudo[199232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:54 compute-0 python3.9[199234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:54 compute-0 sudo[199232]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:54 compute-0 sudo[199355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iodfuuapytmxqjvctfkcaygrsoqalmkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713393.5490537-761-240174616013206/AnsiballZ_copy.py'
Jan 06 15:29:54 compute-0 sudo[199355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:54 compute-0 python3.9[199357]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713393.5490537-761-240174616013206/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:29:54 compute-0 sudo[199355]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:55 compute-0 sudo[199507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pltnnxzerhowbeohpspblphwrivbcaue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713395.1950254-782-60913024701135/AnsiballZ_file.py'
Jan 06 15:29:55 compute-0 sudo[199507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:55 compute-0 python3.9[199509]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:55 compute-0 sudo[199507]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:56 compute-0 sudo[199659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwicipuxdierylbicihfebmaartmemaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713396.1328566-790-221565684802728/AnsiballZ_file.py'
Jan 06 15:29:56 compute-0 sudo[199659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:56 compute-0 python3.9[199661]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:29:56 compute-0 sudo[199659]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:57 compute-0 sudo[199811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucyhrfgcqjjkznjzslcokgtanxhlbqja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713396.845309-798-56115516963457/AnsiballZ_stat.py'
Jan 06 15:29:57 compute-0 sudo[199811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:57 compute-0 python3.9[199813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:29:57 compute-0 sudo[199811]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:57 compute-0 sudo[199889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkqmownsfquuvcyrqsoozfpguiwqqkzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713396.845309-798-56115516963457/AnsiballZ_file.py'
Jan 06 15:29:57 compute-0 sudo[199889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:29:57 compute-0 python3.9[199891]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.xgubmla8 recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:29:57 compute-0 sudo[199889]: pam_unix(sudo:session): session closed for user root
Jan 06 15:29:58 compute-0 python3.9[200041]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:00 compute-0 podman[200389]: 2026-01-06 15:30:00.822266316 +0000 UTC m=+0.080477655 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 06 15:30:00 compute-0 sudo[200479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocnhwgivclngoyazgqqvluxnqlqwhgbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713400.578579-835-162759243882804/AnsiballZ_container_config_data.py'
Jan 06 15:30:00 compute-0 sudo[200479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:01 compute-0 python3.9[200481]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Jan 06 15:30:01 compute-0 sudo[200479]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:01 compute-0 sudo[200642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktggoitayqqqnaatkncbgwherwzkzviz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713401.5629222-846-96185815305953/AnsiballZ_container_config_hash.py'
Jan 06 15:30:01 compute-0 podman[200605]: 2026-01-06 15:30:01.938051908 +0000 UTC m=+0.074317676 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:30:01 compute-0 sudo[200642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:01 compute-0 systemd[1]: 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32-3e817466b7154b4f.service: Main process exited, code=exited, status=1/FAILURE
Jan 06 15:30:01 compute-0 systemd[1]: 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32-3e817466b7154b4f.service: Failed with result 'exit-code'.
Jan 06 15:30:02 compute-0 python3.9[200652]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 06 15:30:02 compute-0 sudo[200642]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:02 compute-0 sudo[200802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btucospbktkxuipidlrhknflqmyjswka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713402.4520416-855-252545457863922/AnsiballZ_podman_container_info.py'
Jan 06 15:30:02 compute-0 sudo[200802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:03 compute-0 python3.9[200804]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 06 15:30:03 compute-0 sudo[200802]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:04 compute-0 sudo[200982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rswklecqfxjorkgznzmmabfxvysncmbw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713404.176629-868-46339328109325/AnsiballZ_edpm_container_manage.py'
Jan 06 15:30:04 compute-0 sudo[200982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:04 compute-0 python3[200984]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 06 15:30:06 compute-0 podman[200997]: 2026-01-06 15:30:06.418316669 +0000 UTC m=+1.601513042 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 06 15:30:06 compute-0 podman[201092]: 2026-01-06 15:30:06.594547091 +0000 UTC m=+0.074824707 container create 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:30:06 compute-0 podman[201092]: 2026-01-06 15:30:06.552553195 +0000 UTC m=+0.032830921 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 06 15:30:06 compute-0 python3[200984]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Jan 06 15:30:06 compute-0 sudo[200982]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:07 compute-0 sudo[201281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pagqohhwrtzwiwayxccghpazpwiaokhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713407.0260575-876-145928397654332/AnsiballZ_stat.py'
Jan 06 15:30:07 compute-0 sudo[201281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:07 compute-0 python3.9[201283]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:30:07 compute-0 sudo[201281]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:08 compute-0 sudo[201435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emllxbxikvyyxwxijknsdgnmrgfydwet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713407.9754877-885-90668873844190/AnsiballZ_file.py'
Jan 06 15:30:08 compute-0 sudo[201435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:08 compute-0 python3.9[201437]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:08 compute-0 sudo[201435]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:08 compute-0 sudo[201511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aomztqkswkzzkbvmmkijrxshwkhkquym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713407.9754877-885-90668873844190/AnsiballZ_stat.py'
Jan 06 15:30:08 compute-0 sudo[201511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:08 compute-0 python3.9[201513]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:30:08 compute-0 sudo[201511]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:09 compute-0 sudo[201662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omizuzyuiaylqtjwguarlrwzgviudcgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713409.0642738-885-183041461906259/AnsiballZ_copy.py'
Jan 06 15:30:09 compute-0 sudo[201662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:09 compute-0 python3.9[201664]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767713409.0642738-885-183041461906259/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:09 compute-0 sudo[201662]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:10 compute-0 sudo[201738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwiabjpizpndwjrfvrsqnnrgpecfrjrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713409.0642738-885-183041461906259/AnsiballZ_systemd.py'
Jan 06 15:30:10 compute-0 sudo[201738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:10 compute-0 python3.9[201740]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:30:10 compute-0 systemd[1]: Reloading.
Jan 06 15:30:10 compute-0 systemd-rc-local-generator[201768]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:30:10 compute-0 systemd-sysv-generator[201771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:30:10 compute-0 sudo[201738]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:10 compute-0 auditd[700]: Audit daemon rotating log files
Jan 06 15:30:10 compute-0 sudo[201849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlpbuzrmyugtjnobecbtrcorfpbmvxca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713409.0642738-885-183041461906259/AnsiballZ_systemd.py'
Jan 06 15:30:10 compute-0 sudo[201849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:11 compute-0 python3.9[201851]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:30:11 compute-0 systemd[1]: Reloading.
Jan 06 15:30:11 compute-0 systemd-rc-local-generator[201880]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:30:11 compute-0 systemd-sysv-generator[201884]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:30:11 compute-0 systemd[1]: Starting podman_exporter container...
Jan 06 15:30:11 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fe271f2ffdaf609300f104e9b1565a23520e582515eb277975b602278939f4/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 06 15:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3fe271f2ffdaf609300f104e9b1565a23520e582515eb277975b602278939f4/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 06 15:30:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f.
Jan 06 15:30:11 compute-0 podman[201891]: 2026-01-06 15:30:11.854404234 +0000 UTC m=+0.165001050 container init 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:30:11 compute-0 podman_exporter[201907]: ts=2026-01-06T15:30:11.891Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Jan 06 15:30:11 compute-0 podman_exporter[201907]: ts=2026-01-06T15:30:11.891Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Jan 06 15:30:11 compute-0 podman_exporter[201907]: ts=2026-01-06T15:30:11.891Z caller=handler.go:94 level=info msg="enabled collectors"
Jan 06 15:30:11 compute-0 podman_exporter[201907]: ts=2026-01-06T15:30:11.891Z caller=handler.go:105 level=info collector=container
Jan 06 15:30:11 compute-0 podman[201891]: 2026-01-06 15:30:11.893368793 +0000 UTC m=+0.203965529 container start 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:30:11 compute-0 podman[201891]: podman_exporter
Jan 06 15:30:11 compute-0 systemd[1]: Starting Podman API Service...
Jan 06 15:30:11 compute-0 systemd[1]: Started Podman API Service.
Jan 06 15:30:11 compute-0 systemd[1]: Started podman_exporter container.
Jan 06 15:30:11 compute-0 podman[201918]: time="2026-01-06T15:30:11Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 06 15:30:11 compute-0 podman[201918]: time="2026-01-06T15:30:11Z" level=info msg="Setting parallel job count to 25"
Jan 06 15:30:11 compute-0 podman[201918]: time="2026-01-06T15:30:11Z" level=info msg="Using sqlite as database backend"
Jan 06 15:30:11 compute-0 podman[201918]: time="2026-01-06T15:30:11Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 06 15:30:11 compute-0 podman[201918]: time="2026-01-06T15:30:11Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 06 15:30:11 compute-0 podman[201918]: time="2026-01-06T15:30:11Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 06 15:30:11 compute-0 sudo[201849]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:11 compute-0 podman[201918]: @ - - [06/Jan/2026:15:30:11 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Jan 06 15:30:11 compute-0 podman[201918]: time="2026-01-06T15:30:11Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:30:11 compute-0 podman[201918]: @ - - [06/Jan/2026:15:30:11 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 18095 "" "Go-http-client/1.1"
Jan 06 15:30:11 compute-0 podman_exporter[201907]: ts=2026-01-06T15:30:11.983Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Jan 06 15:30:11 compute-0 podman_exporter[201907]: ts=2026-01-06T15:30:11.986Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Jan 06 15:30:11 compute-0 podman_exporter[201907]: ts=2026-01-06T15:30:11.988Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Jan 06 15:30:11 compute-0 podman[201916]: 2026-01-06 15:30:11.988214601 +0000 UTC m=+0.073966269 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:30:11 compute-0 systemd[1]: 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f-3cafccb7fe163ade.service: Main process exited, code=exited, status=1/FAILURE
Jan 06 15:30:11 compute-0 systemd[1]: 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f-3cafccb7fe163ade.service: Failed with result 'exit-code'.
Jan 06 15:30:12 compute-0 python3.9[202105]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 06 15:30:13 compute-0 sudo[202255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxsfiosdjroazmsggwycahmtsddcebzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713413.0633433-926-74640953251886/AnsiballZ_stat.py'
Jan 06 15:30:13 compute-0 sudo[202255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:13 compute-0 python3.9[202257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:30:13 compute-0 sudo[202255]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:14 compute-0 sudo[202380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjiyyiilyemgllzsqmtpgvoeehdvtjkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713413.0633433-926-74640953251886/AnsiballZ_copy.py'
Jan 06 15:30:14 compute-0 sudo[202380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:14 compute-0 python3.9[202382]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713413.0633433-926-74640953251886/.source.yaml _original_basename=.tdfw7b5e follow=False checksum=6a234460eb626a95f6b633c9c3e3ae5d153a3bd8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:14 compute-0 sudo[202380]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:15 compute-0 sudo[202532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzyruofnvglxrqghdfaxsqldbuuxlbes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713414.5925174-941-101777626935976/AnsiballZ_stat.py'
Jan 06 15:30:15 compute-0 sudo[202532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:15 compute-0 python3.9[202534]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:30:15 compute-0 sudo[202532]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:15 compute-0 sudo[202655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odroypdvirkhoxewntagawusyjrmwfbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713414.5925174-941-101777626935976/AnsiballZ_copy.py'
Jan 06 15:30:15 compute-0 sudo[202655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:15 compute-0 python3.9[202657]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713414.5925174-941-101777626935976/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:30:15 compute-0 sudo[202655]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:16 compute-0 sudo[202807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdjsthmujfojgghbudslyiwlcihqxeie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713416.502338-962-8555620658806/AnsiballZ_file.py'
Jan 06 15:30:16 compute-0 sudo[202807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:17 compute-0 python3.9[202809]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:17 compute-0 sudo[202807]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:18 compute-0 sudo[202959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-undnxnogyqbrxzwkccidczgqfrzmibwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713417.319961-970-126547601372197/AnsiballZ_file.py'
Jan 06 15:30:18 compute-0 sudo[202959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:18 compute-0 python3.9[202961]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:30:18 compute-0 sudo[202959]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:18 compute-0 sudo[203111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlsiwgirawfgjcihvtyvvnrqlazidzrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713418.4717999-978-178712946858440/AnsiballZ_stat.py'
Jan 06 15:30:18 compute-0 sudo[203111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:18 compute-0 python3.9[203113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:30:19 compute-0 sudo[203111]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:19 compute-0 sudo[203189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yniqewvldzwqggqnniehkibmpffimgro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713418.4717999-978-178712946858440/AnsiballZ_file.py'
Jan 06 15:30:19 compute-0 sudo[203189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:19 compute-0 python3.9[203191]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=._cvr2lss recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:19 compute-0 sudo[203189]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:20 compute-0 python3.9[203341]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:20 compute-0 podman[203454]: 2026-01-06 15:30:20.919323539 +0000 UTC m=+0.153468520 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:30:21 compute-0 podman[203518]: 2026-01-06 15:30:21.043263503 +0000 UTC m=+0.080743271 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:30:22 compute-0 sudo[203813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiimgwmpnyyfggcyzsfyulpzmoegadxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713421.9308631-1015-200185648891944/AnsiballZ_container_config_data.py'
Jan 06 15:30:22 compute-0 sudo[203813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:22 compute-0 python3.9[203815]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Jan 06 15:30:22 compute-0 sudo[203813]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:23 compute-0 sudo[203965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayzuagaevgxoylniepwadmvsawuihhjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713422.8404145-1026-272867310486789/AnsiballZ_container_config_hash.py'
Jan 06 15:30:23 compute-0 sudo[203965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:23 compute-0 python3.9[203967]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 06 15:30:23 compute-0 sudo[203965]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:24 compute-0 sudo[204117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chtkwjuqltzpknrgjmdsnacsengawshw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713423.6890244-1035-221035636591120/AnsiballZ_podman_container_info.py'
Jan 06 15:30:24 compute-0 sudo[204117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:24 compute-0 python3.9[204119]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 06 15:30:24 compute-0 sudo[204117]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:25 compute-0 sudo[204296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkanabykkwdfcykofieqlgcmufjrcvyf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713425.2099726-1048-159536258549718/AnsiballZ_edpm_container_manage.py'
Jan 06 15:30:25 compute-0 sudo[204296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:25 compute-0 python3[204298]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 06 15:30:27 compute-0 nova_compute[185513]: 2026-01-06 15:30:27.647 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:30:27 compute-0 nova_compute[185513]: 2026-01-06 15:30:27.648 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:30:27 compute-0 nova_compute[185513]: 2026-01-06 15:30:27.679 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:30:27 compute-0 nova_compute[185513]: 2026-01-06 15:30:27.680 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.044 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.045 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.045 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.046 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.046 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.046 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.070 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.070 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.070 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.071 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.224 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.225 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5809MB free_disk=72.44805145263672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.225 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.225 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.294 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.294 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.323 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.339 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.340 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:30:28 compute-0 nova_compute[185513]: 2026-01-06 15:30:28.340 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:30:28 compute-0 podman[204309]: 2026-01-06 15:30:28.567960199 +0000 UTC m=+2.674114400 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 06 15:30:28 compute-0 podman[204405]: 2026-01-06 15:30:28.776371216 +0000 UTC m=+0.055757447 container create 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7)
Jan 06 15:30:28 compute-0 podman[204405]: 2026-01-06 15:30:28.750551264 +0000 UTC m=+0.029937515 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 06 15:30:28 compute-0 python3[204298]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 06 15:30:28 compute-0 sudo[204296]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:29 compute-0 nova_compute[185513]: 2026-01-06 15:30:29.318 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:30:29 compute-0 sudo[204593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sitgclgwxhnizatemepqfpaalkhizbhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713429.1618974-1056-130013447349832/AnsiballZ_stat.py'
Jan 06 15:30:29 compute-0 sudo[204593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:29 compute-0 python3.9[204595]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:30:29 compute-0 sudo[204593]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:30 compute-0 sudo[204747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qigamyrblunwjbwteoogupgvdaargkmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713430.0066614-1065-107854597277856/AnsiballZ_file.py'
Jan 06 15:30:30 compute-0 sudo[204747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:30 compute-0 python3.9[204749]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:30 compute-0 sudo[204747]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:30 compute-0 sudo[204823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdkohjydsehbhphdoltdlpzenbuuiaoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713430.0066614-1065-107854597277856/AnsiballZ_stat.py'
Jan 06 15:30:30 compute-0 sudo[204823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:30 compute-0 podman[204825]: 2026-01-06 15:30:30.994192718 +0000 UTC m=+0.074569181 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 06 15:30:31 compute-0 python3.9[204826]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:30:31 compute-0 sudo[204823]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:31 compute-0 sudo[204993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spigugfhgwoujjtxvcfmbnezfiokibdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713431.1965234-1065-138909862783445/AnsiballZ_copy.py'
Jan 06 15:30:31 compute-0 sudo[204993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:31 compute-0 python3.9[204995]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767713431.1965234-1065-138909862783445/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:31 compute-0 sudo[204993]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:32 compute-0 sshd-session[205019]: banner exchange: Connection from 3.134.148.59 port 60078: invalid format
Jan 06 15:30:32 compute-0 sudo[205081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltvldizwtkjkjnpchyezuraqcefxkpil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713431.1965234-1065-138909862783445/AnsiballZ_systemd.py'
Jan 06 15:30:32 compute-0 sudo[205081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:32 compute-0 podman[205044]: 2026-01-06 15:30:32.465195707 +0000 UTC m=+0.060868393 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 06 15:30:32 compute-0 systemd[1]: 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32-3e817466b7154b4f.service: Main process exited, code=exited, status=1/FAILURE
Jan 06 15:30:32 compute-0 systemd[1]: 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32-3e817466b7154b4f.service: Failed with result 'exit-code'.
Jan 06 15:30:32 compute-0 python3.9[205089]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:30:32 compute-0 systemd[1]: Reloading.
Jan 06 15:30:32 compute-0 systemd-sysv-generator[205124]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:30:32 compute-0 systemd-rc-local-generator[205119]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:30:33 compute-0 sudo[205081]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:33 compute-0 sudo[205200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prdxfjhhfziggplzgciyswcmfvdwempd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713431.1965234-1065-138909862783445/AnsiballZ_systemd.py'
Jan 06 15:30:33 compute-0 sudo[205200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:33 compute-0 python3.9[205202]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:30:33 compute-0 systemd[1]: Reloading.
Jan 06 15:30:33 compute-0 systemd-rc-local-generator[205225]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:30:33 compute-0 systemd-sysv-generator[205229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:30:34 compute-0 systemd[1]: Starting openstack_network_exporter container...
Jan 06 15:30:34 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f19653a4da26e83084caedb7f0612a371f08f931c4ab81333ecdd881a98608/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 06 15:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f19653a4da26e83084caedb7f0612a371f08f931c4ab81333ecdd881a98608/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 06 15:30:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f19653a4da26e83084caedb7f0612a371f08f931c4ab81333ecdd881a98608/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 06 15:30:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4.
Jan 06 15:30:34 compute-0 podman[205243]: 2026-01-06 15:30:34.266184813 +0000 UTC m=+0.148200762 container init 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.)
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: INFO    15:30:34 main.go:48: registering *bridge.Collector
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: INFO    15:30:34 main.go:48: registering *coverage.Collector
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: INFO    15:30:34 main.go:48: registering *datapath.Collector
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: INFO    15:30:34 main.go:48: registering *iface.Collector
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: INFO    15:30:34 main.go:48: registering *memory.Collector
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: INFO    15:30:34 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: INFO    15:30:34 main.go:48: registering *ovn.Collector
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: INFO    15:30:34 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: INFO    15:30:34 main.go:48: registering *pmd_perf.Collector
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: INFO    15:30:34 main.go:48: registering *pmd_rxq.Collector
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: INFO    15:30:34 main.go:48: registering *vswitch.Collector
Jan 06 15:30:34 compute-0 openstack_network_exporter[205258]: NOTICE  15:30:34 main.go:76: listening on https://:9105/metrics
Jan 06 15:30:34 compute-0 podman[205243]: 2026-01-06 15:30:34.299010953 +0000 UTC m=+0.181026872 container start 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, architecture=x86_64, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Jan 06 15:30:34 compute-0 podman[205243]: openstack_network_exporter
Jan 06 15:30:34 compute-0 systemd[1]: Started openstack_network_exporter container.
Jan 06 15:30:34 compute-0 sudo[205200]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:34 compute-0 podman[205268]: 2026-01-06 15:30:34.399829005 +0000 UTC m=+0.093414266 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 06 15:30:35 compute-0 python3.9[205440]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 06 15:30:35 compute-0 sudo[205590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnqpksggksgqasbopyxuzfocwrbzafls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713435.5359051-1106-169841327923136/AnsiballZ_stat.py'
Jan 06 15:30:35 compute-0 sudo[205590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:36 compute-0 python3.9[205592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:30:36 compute-0 sudo[205590]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:36 compute-0 sudo[205715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xukuoymxvcsvsgiybdhtiwxvyhtfujyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713435.5359051-1106-169841327923136/AnsiballZ_copy.py'
Jan 06 15:30:36 compute-0 sudo[205715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:36 compute-0 python3.9[205717]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713435.5359051-1106-169841327923136/.source.yaml _original_basename=.z8ct3ni2 follow=False checksum=5c7c207948edba3988225ed81462adb5fa0d3355 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:36 compute-0 sudo[205715]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:37 compute-0 sudo[205867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axkeruxkjhdmrjcmptlgjlgxblpbjpfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713436.9812307-1121-135283147803820/AnsiballZ_find.py'
Jan 06 15:30:37 compute-0 sudo[205867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:37 compute-0 python3.9[205869]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 06 15:30:37 compute-0 sudo[205867]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:38 compute-0 sudo[206019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usxjoiepfyrtlnowvdcmivpgowiudswl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713437.933755-1131-158684418558396/AnsiballZ_podman_container_info.py'
Jan 06 15:30:38 compute-0 sudo[206019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:38 compute-0 python3.9[206021]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 06 15:30:38 compute-0 sudo[206019]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:39 compute-0 sudo[206185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtqqonmvlfjawzsubxahgupvtayegtre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713438.715331-1139-56274198214316/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:39 compute-0 sudo[206185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:39 compute-0 python3.9[206187]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:39 compute-0 systemd[1]: Started libpod-conmon-79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2.scope.
Jan 06 15:30:39 compute-0 podman[206188]: 2026-01-06 15:30:39.653333246 +0000 UTC m=+0.125818887 container exec 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 06 15:30:39 compute-0 podman[206188]: 2026-01-06 15:30:39.665523321 +0000 UTC m=+0.138008902 container exec_died 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 06 15:30:39 compute-0 sudo[206185]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:39 compute-0 systemd[1]: libpod-conmon-79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2.scope: Deactivated successfully.
Jan 06 15:30:40 compute-0 sudo[206370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhldzuzqtaeymgtnmhrabgnnbegsvtfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713439.954397-1147-9915667080793/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:40 compute-0 sudo[206370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:40 compute-0 python3.9[206372]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:40 compute-0 systemd[1]: Started libpod-conmon-79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2.scope.
Jan 06 15:30:40 compute-0 podman[206373]: 2026-01-06 15:30:40.645273364 +0000 UTC m=+0.069523078 container exec 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:30:40 compute-0 podman[206373]: 2026-01-06 15:30:40.674173826 +0000 UTC m=+0.098423540 container exec_died 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 06 15:30:40 compute-0 systemd[1]: libpod-conmon-79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2.scope: Deactivated successfully.
Jan 06 15:30:40 compute-0 sudo[206370]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:41 compute-0 sudo[206554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amultrgdxlbrqibdoqnrasfgvnkzmufb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713440.9130538-1155-109972431303071/AnsiballZ_file.py'
Jan 06 15:30:41 compute-0 sudo[206554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:41 compute-0 python3.9[206556]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:41 compute-0 sudo[206554]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:42 compute-0 sudo[206716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vumvobzgcehstfnzqptrmnhhfmsnsrfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713441.7656994-1164-258453570182940/AnsiballZ_podman_container_info.py'
Jan 06 15:30:42 compute-0 sudo[206716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:42 compute-0 podman[206680]: 2026-01-06 15:30:42.213626997 +0000 UTC m=+0.109702474 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:30:42 compute-0 python3.9[206721]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 06 15:30:42 compute-0 sudo[206716]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:43 compute-0 sudo[206896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flwkvmepznajhmgxmygwpkryiwgmsava ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713442.6519382-1172-233896673248008/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:43 compute-0 sudo[206896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:43 compute-0 python3.9[206898]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:43 compute-0 systemd[1]: Started libpod-conmon-7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487.scope.
Jan 06 15:30:43 compute-0 podman[206899]: 2026-01-06 15:30:43.350902863 +0000 UTC m=+0.089510539 container exec 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 06 15:30:43 compute-0 podman[206899]: 2026-01-06 15:30:43.384779587 +0000 UTC m=+0.123387333 container exec_died 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 15:30:43 compute-0 systemd[1]: libpod-conmon-7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487.scope: Deactivated successfully.
Jan 06 15:30:43 compute-0 sudo[206896]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:43 compute-0 sudo[207080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkrgnpbcvlyrqpjlwwgrtkqrfejjjsix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713443.6649208-1180-188703750990704/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:43 compute-0 sudo[207080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:44 compute-0 sshd-session[205239]: Connection closed by 3.134.148.59 port 47414 [preauth]
Jan 06 15:30:44 compute-0 python3.9[207082]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:44 compute-0 systemd[1]: Started libpod-conmon-7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487.scope.
Jan 06 15:30:44 compute-0 podman[207083]: 2026-01-06 15:30:44.309726315 +0000 UTC m=+0.085040908 container exec 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:30:44 compute-0 podman[207083]: 2026-01-06 15:30:44.340657873 +0000 UTC m=+0.115972466 container exec_died 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 06 15:30:44 compute-0 sudo[207080]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:44 compute-0 systemd[1]: libpod-conmon-7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487.scope: Deactivated successfully.
Jan 06 15:30:44 compute-0 sudo[207262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjftvstkfpfexsroxherbsehyizbzyvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713444.5929453-1188-252598816670880/AnsiballZ_file.py'
Jan 06 15:30:44 compute-0 sudo[207262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:45 compute-0 python3.9[207264]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:45 compute-0 sudo[207262]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:45 compute-0 sudo[207414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccmpgoqfuohbqddvhmsgbbuetilrnaqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713445.3726366-1197-48076792388125/AnsiballZ_podman_container_info.py'
Jan 06 15:30:45 compute-0 sudo[207414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:45 compute-0 python3.9[207416]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 06 15:30:46 compute-0 sudo[207414]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:46 compute-0 sudo[207580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bybnschrenanlswlkfhlbwpridasllby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713446.2521243-1205-58432036134520/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:46 compute-0 sudo[207580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:46 compute-0 python3.9[207582]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:46 compute-0 systemd[1]: Started libpod-conmon-3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32.scope.
Jan 06 15:30:46 compute-0 podman[207583]: 2026-01-06 15:30:46.89270915 +0000 UTC m=+0.105304745 container exec 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 06 15:30:46 compute-0 podman[207583]: 2026-01-06 15:30:46.928477476 +0000 UTC m=+0.141073021 container exec_died 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 06 15:30:46 compute-0 systemd[1]: libpod-conmon-3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32.scope: Deactivated successfully.
Jan 06 15:30:46 compute-0 sudo[207580]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:47 compute-0 sudo[207763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpgratwjupbjdfplppiqjlpsklawvehl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713447.1923344-1213-149613184956179/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:47 compute-0 sudo[207763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:47 compute-0 python3.9[207765]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:47 compute-0 systemd[1]: Started libpod-conmon-3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32.scope.
Jan 06 15:30:47 compute-0 podman[207766]: 2026-01-06 15:30:47.990590727 +0000 UTC m=+0.099754890 container exec 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 06 15:30:48 compute-0 podman[207766]: 2026-01-06 15:30:48.026722821 +0000 UTC m=+0.135886914 container exec_died 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:30:48 compute-0 systemd[1]: libpod-conmon-3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32.scope: Deactivated successfully.
Jan 06 15:30:48 compute-0 sudo[207763]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:48 compute-0 sudo[207947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzbuqxzliflrzioarhdxekeaiboblfyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713448.311734-1221-29437150025528/AnsiballZ_file.py'
Jan 06 15:30:48 compute-0 sudo[207947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:48 compute-0 python3.9[207949]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:48 compute-0 sudo[207947]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:49 compute-0 sudo[208099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjyqhvzstfsmhtrobvazfbvueohgtocr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713449.061961-1230-275975444350881/AnsiballZ_podman_container_info.py'
Jan 06 15:30:49 compute-0 sudo[208099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:49 compute-0 python3.9[208101]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 06 15:30:49 compute-0 sudo[208099]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:50 compute-0 sudo[208265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gppvmwpdbvspbxrtscedhoqkjnmvpxek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713449.8572354-1238-262651507419284/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:50 compute-0 sudo[208265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:50 compute-0 python3.9[208267]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:50 compute-0 systemd[1]: Started libpod-conmon-97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e.scope.
Jan 06 15:30:50 compute-0 podman[208268]: 2026-01-06 15:30:50.559105715 +0000 UTC m=+0.136429897 container exec 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:30:50 compute-0 podman[208268]: 2026-01-06 15:30:50.589951459 +0000 UTC m=+0.167275641 container exec_died 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:30:50 compute-0 systemd[1]: libpod-conmon-97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e.scope: Deactivated successfully.
Jan 06 15:30:50 compute-0 sudo[208265]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:51 compute-0 sudo[208470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-harojghdrvtwhgryzzczkxxhnadspvwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713450.8322148-1246-129641227668825/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:51 compute-0 sudo[208470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:51 compute-0 podman[208424]: 2026-01-06 15:30:51.177256748 +0000 UTC m=+0.073992529 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:30:51 compute-0 podman[208423]: 2026-01-06 15:30:51.207731515 +0000 UTC m=+0.105045859 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 06 15:30:51 compute-0 python3.9[208483]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:51 compute-0 systemd[1]: Started libpod-conmon-97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e.scope.
Jan 06 15:30:51 compute-0 podman[208501]: 2026-01-06 15:30:51.496868232 +0000 UTC m=+0.087501073 container exec 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:30:51 compute-0 podman[208501]: 2026-01-06 15:30:51.528971486 +0000 UTC m=+0.119604297 container exec_died 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:30:51 compute-0 systemd[1]: libpod-conmon-97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e.scope: Deactivated successfully.
Jan 06 15:30:51 compute-0 sudo[208470]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:52 compute-0 sudo[208681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayduwvamlukkeolyhyrhxvkxrembhvlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713451.909917-1254-93661167081533/AnsiballZ_file.py'
Jan 06 15:30:52 compute-0 sudo[208681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:52 compute-0 python3.9[208683]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:52 compute-0 sudo[208681]: pam_unix(sudo:session): session closed for user root
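[editor's note] The journal entries above show the telemetry role querying the node_exporter container for its runtime uid/gid (podman_container_exec with "id -u" / "id -g") and then setting owner/group/mode 0700 on the healthcheck mount. A minimal Python sketch of that same sequence is below; it is an illustration of the pattern, not the actual AnsiballZ payload, and it assumes podman is installed and the caller has the privileges the log shows (root via sudo).

    # Sketch: resolve the container's uid/gid, then chown/chmod the host-side
    # healthcheck directory to match (owner=0 group=0 mode=0700 in the log,
    # since node_exporter runs as root inside the container).
    import os
    import subprocess

    def container_id(name: str, flag: str) -> int:
        # mirrors `podman exec <name> id -u` / `id -g` seen in the journal
        out = subprocess.run(
            ["podman", "exec", name, "id", flag],
            check=True, capture_output=True, text=True,
        )
        return int(out.stdout.strip())

    if __name__ == "__main__":
        name = "node_exporter"
        path = "/var/lib/openstack/healthchecks/node_exporter"
        uid = container_id(name, "-u")
        gid = container_id(name, "-g")
        os.makedirs(path, exist_ok=True)
        os.chown(path, uid, gid)
        os.chmod(path, 0o700)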
Jan 06 15:30:53 compute-0 sudo[208833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnsteuorfnbteqohcsjtqhzjaiyumikj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713452.63572-1263-136518992614062/AnsiballZ_podman_container_info.py'
Jan 06 15:30:53 compute-0 sudo[208833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:53 compute-0 python3.9[208835]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 06 15:30:53 compute-0 sudo[208833]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:30:53.668 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:30:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:30:53.670 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:30:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:30:53.671 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:30:53 compute-0 sudo[208999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kueeexiopgraqeeyhhzalctcxzzgrmpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713453.4997907-1271-109030103564553/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:53 compute-0 sudo[208999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:54 compute-0 python3.9[209001]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:54 compute-0 systemd[1]: Started libpod-conmon-935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f.scope.
Jan 06 15:30:54 compute-0 podman[209002]: 2026-01-06 15:30:54.20292221 +0000 UTC m=+0.095618376 container exec 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:30:54 compute-0 podman[209002]: 2026-01-06 15:30:54.232986647 +0000 UTC m=+0.125682733 container exec_died 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:30:54 compute-0 sudo[208999]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:54 compute-0 systemd[1]: libpod-conmon-935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f.scope: Deactivated successfully.
Jan 06 15:30:54 compute-0 sudo[209183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxomgtymyoqxyaunbmifavdzpugjegrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713454.5184374-1279-192007746199357/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:54 compute-0 sudo[209183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:55 compute-0 python3.9[209185]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:55 compute-0 systemd[1]: Started libpod-conmon-935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f.scope.
Jan 06 15:30:55 compute-0 podman[209186]: 2026-01-06 15:30:55.198792981 +0000 UTC m=+0.075924201 container exec 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:30:55 compute-0 podman[209186]: 2026-01-06 15:30:55.230065833 +0000 UTC m=+0.107196843 container exec_died 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:30:55 compute-0 sudo[209183]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:55 compute-0 systemd[1]: libpod-conmon-935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f.scope: Deactivated successfully.
Jan 06 15:30:55 compute-0 sudo[209366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxeuhqpqpwwfptrpbisqgziazdgjacar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713455.481722-1287-182308422034401/AnsiballZ_file.py'
Jan 06 15:30:55 compute-0 sudo[209366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:56 compute-0 python3.9[209368]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:56 compute-0 sudo[209366]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:56 compute-0 sudo[209518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqmexwcngkfvyltjrvyckyaweffbnlfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713456.3815258-1296-235581035735839/AnsiballZ_podman_container_info.py'
Jan 06 15:30:56 compute-0 sudo[209518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:56 compute-0 python3.9[209520]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 06 15:30:56 compute-0 sudo[209518]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:57 compute-0 sudo[209683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsovuhgdcnrwxfglratnlvlmoxqajabr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713457.154904-1304-157372587257048/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:57 compute-0 sudo[209683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:57 compute-0 python3.9[209685]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:57 compute-0 systemd[1]: Started libpod-conmon-6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4.scope.
Jan 06 15:30:57 compute-0 podman[209686]: 2026-01-06 15:30:57.91176099 +0000 UTC m=+0.116514662 container exec 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=openstack_network_exporter, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Jan 06 15:30:57 compute-0 podman[209686]: 2026-01-06 15:30:57.946425743 +0000 UTC m=+0.151179395 container exec_died 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41)
Jan 06 15:30:57 compute-0 systemd[1]: libpod-conmon-6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4.scope: Deactivated successfully.
Jan 06 15:30:57 compute-0 sudo[209683]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:58 compute-0 sudo[209867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdqmiydlizipvkfkvyxrqxyfqfrlyhcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713458.144731-1312-135601298706506/AnsiballZ_podman_container_exec.py'
Jan 06 15:30:58 compute-0 sudo[209867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:58 compute-0 python3.9[209869]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:30:58 compute-0 systemd[1]: Started libpod-conmon-6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4.scope.
Jan 06 15:30:58 compute-0 podman[209870]: 2026-01-06 15:30:58.859293927 +0000 UTC m=+0.109348191 container exec 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 06 15:30:58 compute-0 podman[209870]: 2026-01-06 15:30:58.892685906 +0000 UTC m=+0.142740220 container exec_died 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Jan 06 15:30:58 compute-0 systemd[1]: libpod-conmon-6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4.scope: Deactivated successfully.
Jan 06 15:30:58 compute-0 sudo[209867]: pam_unix(sudo:session): session closed for user root
Jan 06 15:30:59 compute-0 sudo[210052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plgtxecixdwghvyevjzmrwgwkjlgffem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713459.1387184-1320-250478618553129/AnsiballZ_file.py'
Jan 06 15:30:59 compute-0 sudo[210052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:30:59 compute-0 python3.9[210054]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:30:59 compute-0 sudo[210052]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:00 compute-0 sudo[210204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ektnlnnwdbhflsrazwwikeejhmcupsua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713459.8474848-1329-172985818833789/AnsiballZ_file.py'
Jan 06 15:31:00 compute-0 sudo[210204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:00 compute-0 python3.9[210206]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:00 compute-0 sudo[210204]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:01 compute-0 sudo[210356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlsnnlqwhkuuslfmltptftbzwelgrdbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713460.5055163-1337-110020957862843/AnsiballZ_stat.py'
Jan 06 15:31:01 compute-0 sudo[210356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:01 compute-0 podman[210358]: 2026-01-06 15:31:01.097582154 +0000 UTC m=+0.051970774 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 06 15:31:01 compute-0 python3.9[210359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:01 compute-0 sudo[210356]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:01 compute-0 sudo[210499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhjrozfgznrwfxgfjsaczkmvurootuvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713460.5055163-1337-110020957862843/AnsiballZ_copy.py'
Jan 06 15:31:01 compute-0 sudo[210499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:01 compute-0 python3.9[210501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713460.5055163-1337-110020957862843/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:01 compute-0 sudo[210499]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:02 compute-0 sudo[210651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybchovaeqqnplrjxcshgrcnbehokqqvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713462.0430288-1353-164154885701020/AnsiballZ_file.py'
Jan 06 15:31:02 compute-0 sudo[210651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:02 compute-0 python3.9[210653]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:02 compute-0 sudo[210651]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:02 compute-0 podman[210654]: 2026-01-06 15:31:02.851949112 +0000 UTC m=+0.123632151 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251224)
Jan 06 15:31:03 compute-0 sudo[210823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyrboneijybhbnuebaauzjumgnoajwrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713463.0517623-1361-279788090272930/AnsiballZ_stat.py'
Jan 06 15:31:03 compute-0 sudo[210823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:03 compute-0 python3.9[210825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:03 compute-0 sudo[210823]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:04 compute-0 sudo[210901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxcfejbazsdkfkpdemcpytuxyrezzokp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713463.0517623-1361-279788090272930/AnsiballZ_file.py'
Jan 06 15:31:04 compute-0 sudo[210901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:04 compute-0 python3.9[210903]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:04 compute-0 sudo[210901]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:04 compute-0 podman[210980]: 2026-01-06 15:31:04.834554605 +0000 UTC m=+0.104163513 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Jan 06 15:31:04 compute-0 sudo[211074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjhswijndflscnznqpiaytfpbneelnaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713464.6273775-1373-209027318737483/AnsiballZ_stat.py'
Jan 06 15:31:04 compute-0 sudo[211074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:05 compute-0 python3.9[211076]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:05 compute-0 sudo[211074]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:05 compute-0 sudo[211152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csothoxzclnstsjexzvhcynpunspkrur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713464.6273775-1373-209027318737483/AnsiballZ_file.py'
Jan 06 15:31:05 compute-0 sudo[211152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:05 compute-0 python3.9[211154]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2c0a1uk1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:05 compute-0 sudo[211152]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:06 compute-0 sudo[211304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbvljljdqjlbbiwwzoxzvoidrxzkktem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713465.9597254-1385-166472367643993/AnsiballZ_stat.py'
Jan 06 15:31:06 compute-0 sudo[211304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:06 compute-0 python3.9[211306]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:06 compute-0 sudo[211304]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:06 compute-0 sudo[211382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etrzbcvfstlcwwmfiicamumpjvvfqdks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713465.9597254-1385-166472367643993/AnsiballZ_file.py'
Jan 06 15:31:06 compute-0 sudo[211382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:06 compute-0 python3.9[211384]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:07 compute-0 sudo[211382]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:07 compute-0 sudo[211534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xykzemlrdxjitwgjgyoixlcskgrugllc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713467.2459-1398-219685532855909/AnsiballZ_command.py'
Jan 06 15:31:07 compute-0 sudo[211534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:07 compute-0 python3.9[211536]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:31:07 compute-0 sudo[211534]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:08 compute-0 sudo[211687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irtokjcrisruurzkrpevlpmrituvwyhi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713468.121037-1406-166483048071563/AnsiballZ_edpm_nftables_from_files.py'
Jan 06 15:31:08 compute-0 sudo[211687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:08 compute-0 python3[211689]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 06 15:31:08 compute-0 sudo[211687]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:09 compute-0 sudo[211839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyvlkqvsfazswbczscgzhkmchtsnsbxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713469.0328908-1414-105774113568308/AnsiballZ_stat.py'
Jan 06 15:31:09 compute-0 sudo[211839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:09 compute-0 python3.9[211841]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:09 compute-0 sudo[211839]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:10 compute-0 sudo[211917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvkrtiqdpbtriielpztrgnrkjjfwypfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713469.0328908-1414-105774113568308/AnsiballZ_file.py'
Jan 06 15:31:10 compute-0 sudo[211917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:10 compute-0 python3.9[211919]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:10 compute-0 sudo[211917]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:11 compute-0 sudo[212069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvqfbvztyxuyplznmgqarmabpwnrtdxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713470.5157511-1426-145212676151852/AnsiballZ_stat.py'
Jan 06 15:31:11 compute-0 sudo[212069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:11 compute-0 python3.9[212071]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:11 compute-0 sudo[212069]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:11 compute-0 sudo[212147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdzvtesfzogkiubrkolonoximlxldgcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713470.5157511-1426-145212676151852/AnsiballZ_file.py'
Jan 06 15:31:11 compute-0 sudo[212147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:11 compute-0 python3.9[212149]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:11 compute-0 sudo[212147]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:12 compute-0 sudo[212314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvssgfuxeukrsmyfokjclhzrmsnkawdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713471.9955413-1438-3838228392729/AnsiballZ_stat.py'
Jan 06 15:31:12 compute-0 podman[212273]: 2026-01-06 15:31:12.396816137 +0000 UTC m=+0.068323700 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 15:31:12 compute-0 sudo[212314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:12 compute-0 python3.9[212325]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:12 compute-0 sudo[212314]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:12 compute-0 sudo[212401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itqbxdyzppobwbkfevtvrdbkcnshejaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713471.9955413-1438-3838228392729/AnsiballZ_file.py'
Jan 06 15:31:12 compute-0 sudo[212401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:13 compute-0 python3.9[212403]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:13 compute-0 sudo[212401]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:13 compute-0 sudo[212553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svxzeocoxxvmyhvrveczajqviqrymdgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713473.2858784-1450-273383748587697/AnsiballZ_stat.py'
Jan 06 15:31:13 compute-0 sudo[212553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:13 compute-0 python3.9[212555]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:13 compute-0 sudo[212553]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:14 compute-0 sudo[212631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubdnikulvkdefxfzthjmaplsoiugxuyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713473.2858784-1450-273383748587697/AnsiballZ_file.py'
Jan 06 15:31:14 compute-0 sudo[212631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:14 compute-0 python3.9[212633]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:14 compute-0 sudo[212631]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:15 compute-0 sudo[212783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfioppzmnhjsnifvxzyzwwwrszzfngfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713474.6095624-1462-44228481194608/AnsiballZ_stat.py'
Jan 06 15:31:15 compute-0 sudo[212783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:15 compute-0 python3.9[212785]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:15 compute-0 sudo[212783]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:16 compute-0 sudo[212908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygvqljrexmrymiahatavardaqspegbca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713474.6095624-1462-44228481194608/AnsiballZ_copy.py'
Jan 06 15:31:16 compute-0 sudo[212908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:16 compute-0 python3.9[212910]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713474.6095624-1462-44228481194608/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:16 compute-0 sudo[212908]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:16 compute-0 sudo[213060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gccujqothquzdvipueltsoqjlrxizaug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713476.4236023-1477-187801883431579/AnsiballZ_file.py'
Jan 06 15:31:16 compute-0 sudo[213060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:16 compute-0 python3.9[213062]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:16 compute-0 sudo[213060]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:17 compute-0 sudo[213212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cumkyquxuvxwbdpiwasrchmveeltmatd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713477.1435783-1485-65589074845374/AnsiballZ_command.py'
Jan 06 15:31:17 compute-0 sudo[213212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:17 compute-0 python3.9[213214]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:31:17 compute-0 sudo[213212]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:18 compute-0 sudo[213367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yexzpakcoeynfcopmacetkespkaqvgdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713477.8974395-1493-257574006751024/AnsiballZ_blockinfile.py'
Jan 06 15:31:18 compute-0 sudo[213367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:18 compute-0 python3.9[213369]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:18 compute-0 sudo[213367]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:19 compute-0 sudo[213519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhedukazmwzxsttoxbqgngybciqiyuyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713478.8143637-1502-155552647098021/AnsiballZ_command.py'
Jan 06 15:31:19 compute-0 sudo[213519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:19 compute-0 python3.9[213521]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:31:19 compute-0 sudo[213519]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:19 compute-0 sudo[213672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmexecheocmblpsckqnclqbuckvmlxcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713479.5959308-1510-6142798918799/AnsiballZ_stat.py'
Jan 06 15:31:19 compute-0 sudo[213672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:20 compute-0 python3.9[213674]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:31:20 compute-0 sudo[213672]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:20 compute-0 sudo[213826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dajdltttfhpbupxhpijrwfrfsvbmxhrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713480.320085-1518-130772814479961/AnsiballZ_command.py'
Jan 06 15:31:20 compute-0 sudo[213826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:20 compute-0 python3.9[213828]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:31:20 compute-0 sudo[213826]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:21 compute-0 sudo[214014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyqookvfdzownjpjdvipgwixxhatdrci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713481.0276427-1526-145249830885874/AnsiballZ_file.py'
Jan 06 15:31:21 compute-0 sudo[214014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:21 compute-0 podman[213956]: 2026-01-06 15:31:21.339093286 +0000 UTC m=+0.052657123 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:31:21 compute-0 podman[213955]: 2026-01-06 15:31:21.369987978 +0000 UTC m=+0.086518464 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 06 15:31:21 compute-0 python3.9[214027]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:21 compute-0 sudo[214014]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:21 compute-0 sshd-session[185857]: Connection closed by 192.168.122.30 port 36476
Jan 06 15:31:21 compute-0 sshd-session[185854]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:31:21 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 06 15:31:21 compute-0 systemd[1]: session-26.scope: Consumed 2min 7.269s CPU time.
Jan 06 15:31:21 compute-0 systemd-logind[791]: Session 26 logged out. Waiting for processes to exit.
Jan 06 15:31:21 compute-0 systemd-logind[791]: Removed session 26.
Jan 06 15:31:27 compute-0 nova_compute[185513]: 2026-01-06 15:31:27.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:31:27 compute-0 sshd-session[214060]: Accepted publickey for zuul from 192.168.122.30 port 47690 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:31:27 compute-0 systemd-logind[791]: New session 27 of user zuul.
Jan 06 15:31:27 compute-0 systemd[1]: Started Session 27 of User zuul.
Jan 06 15:31:27 compute-0 sshd-session[214060]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.152 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.153 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.153 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.153 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.356 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.358 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5826MB free_disk=72.48310852050781GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.358 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.359 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.417 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.417 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.464 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.477 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.479 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:31:28 compute-0 nova_compute[185513]: 2026-01-06 15:31:28.480 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:31:28 compute-0 sudo[214213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgxgojqdperkxxhvcslpucwlrvtinhcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713488.0248566-19-264039974725193/AnsiballZ_systemd_service.py'
Jan 06 15:31:28 compute-0 sudo[214213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:29 compute-0 python3.9[214215]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:31:29 compute-0 systemd[1]: Reloading.
Jan 06 15:31:29 compute-0 systemd-sysv-generator[214241]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:31:29 compute-0 systemd-rc-local-generator[214238]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:31:29 compute-0 nova_compute[185513]: 2026-01-06 15:31:29.481 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:31:29 compute-0 nova_compute[185513]: 2026-01-06 15:31:29.482 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:31:29 compute-0 nova_compute[185513]: 2026-01-06 15:31:29.483 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:31:29 compute-0 sudo[214213]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:29 compute-0 podman[201918]: time="2026-01-06T15:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:31:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21257 "" "Go-http-client/1.1"
Jan 06 15:31:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2999 "" "Go-http-client/1.1"
Jan 06 15:31:30 compute-0 nova_compute[185513]: 2026-01-06 15:31:30.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:31:30 compute-0 nova_compute[185513]: 2026-01-06 15:31:30.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:31:30 compute-0 nova_compute[185513]: 2026-01-06 15:31:30.026 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:31:30 compute-0 nova_compute[185513]: 2026-01-06 15:31:30.051 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:31:30 compute-0 python3.9[214402]: ansible-ansible.builtin.service_facts Invoked
Jan 06 15:31:30 compute-0 network[214419]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 06 15:31:30 compute-0 network[214420]: 'network-scripts' will be removed from distribution in near future.
Jan 06 15:31:30 compute-0 network[214421]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 06 15:31:31 compute-0 nova_compute[185513]: 2026-01-06 15:31:31.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:31:31 compute-0 openstack_network_exporter[205258]: ERROR   15:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:31:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:31:31 compute-0 openstack_network_exporter[205258]: ERROR   15:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:31:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:31:31 compute-0 podman[214428]: 2026-01-06 15:31:31.45403439 +0000 UTC m=+0.095488562 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 06 15:31:33 compute-0 podman[214523]: 2026-01-06 15:31:33.005113439 +0000 UTC m=+0.098159804 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.064 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.065 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca63a600>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:31:33.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:31:34 compute-0 podman[214581]: 2026-01-06 15:31:34.947289425 +0000 UTC m=+0.067717683 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41)
Jan 06 15:31:35 compute-0 sudo[214755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsmqekvwwnhguotzbktlhdualplvjkbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713495.3313854-42-148563063888496/AnsiballZ_systemd_service.py'
Jan 06 15:31:35 compute-0 sudo[214755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:36 compute-0 python3.9[214757]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:31:36 compute-0 sudo[214755]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:36 compute-0 sudo[214908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buvyptdbazfsoeazbtncfeklixfukxnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713496.3952625-52-114125678897615/AnsiballZ_file.py'
Jan 06 15:31:36 compute-0 sudo[214908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:37 compute-0 python3.9[214910]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:37 compute-0 sudo[214908]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:37 compute-0 sudo[215060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuqmjsqebktyrtkefjtedrrnglubeqyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713497.340317-60-255126331852856/AnsiballZ_file.py'
Jan 06 15:31:37 compute-0 sudo[215060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:37 compute-0 python3.9[215062]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:37 compute-0 sudo[215060]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:38 compute-0 sudo[215212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryhcxkmelnvkkfeczznqizrezvynlxdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713498.3034024-69-263920252598650/AnsiballZ_command.py'
Jan 06 15:31:38 compute-0 sudo[215212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:38 compute-0 python3.9[215214]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:31:39 compute-0 sudo[215212]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:39 compute-0 python3.9[215366]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 06 15:31:40 compute-0 sudo[215516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opbkwqpuzovqhxalfixldpithrorhpdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713500.22118-87-90832490330097/AnsiballZ_systemd_service.py'
Jan 06 15:31:40 compute-0 sudo[215516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:40 compute-0 python3.9[215518]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:31:40 compute-0 systemd[1]: Reloading.
Jan 06 15:31:40 compute-0 systemd-rc-local-generator[215544]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:31:40 compute-0 systemd-sysv-generator[215547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:31:41 compute-0 sudo[215516]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:41 compute-0 sudo[215704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laxzlvkprdcomsfxjtsqdyuniiiqpphp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713501.3482616-95-239428696593955/AnsiballZ_command.py'
Jan 06 15:31:41 compute-0 sudo[215704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:41 compute-0 python3.9[215706]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:31:41 compute-0 sudo[215704]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:42 compute-0 sudo[215870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeqvbhfcesarpnyhuidlrnhdjsmpsrgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713502.257945-104-98010350914109/AnsiballZ_file.py'
Jan 06 15:31:42 compute-0 sudo[215870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:42 compute-0 podman[215831]: 2026-01-06 15:31:42.642806904 +0000 UTC m=+0.095404810 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:31:42 compute-0 python3.9[215880]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:31:42 compute-0 sudo[215870]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:43 compute-0 python3.9[216033]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:31:44 compute-0 python3.9[216185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:45 compute-0 python3.9[216306]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713504.1468058-120-218595210463561/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:31:46 compute-0 python3.9[216456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:46 compute-0 python3.9[216577]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713505.7258713-135-189378129833174/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:31:47 compute-0 sudo[216727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zohyqzeblvcgttzwgahdbldwldqmhmay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713507.2985396-153-215243818188636/AnsiballZ_getent.py'
Jan 06 15:31:47 compute-0 sudo[216727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:47 compute-0 python3.9[216729]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 06 15:31:47 compute-0 sudo[216727]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:49 compute-0 python3.9[216880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:50 compute-0 python3.9[217001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767713508.7096498-181-277704563675762/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:50 compute-0 python3.9[217151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:51 compute-0 python3.9[217272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767713510.2538567-181-51723376517105/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:51 compute-0 podman[217273]: 2026-01-06 15:31:51.482876991 +0000 UTC m=+0.072552231 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:31:51 compute-0 podman[217274]: 2026-01-06 15:31:51.549477263 +0000 UTC m=+0.144548127 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:31:52 compute-0 python3.9[217473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:52 compute-0 python3.9[217594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767713511.7811263-181-198213498242696/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:53 compute-0 python3.9[217744]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:31:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:31:53.669 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:31:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:31:53.672 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:31:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:31:53.672 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:31:54 compute-0 python3.9[217896]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:31:55 compute-0 python3.9[218048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:55 compute-0 python3.9[218169]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713514.4360092-240-120456524411402/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:56 compute-0 sudo[218319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hayxgsdwqrzwxyiifquddubicnrjrzcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713515.8667996-255-86260329853983/AnsiballZ_file.py'
Jan 06 15:31:56 compute-0 sudo[218319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:56 compute-0 python3.9[218321]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:56 compute-0 sudo[218319]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:57 compute-0 sudo[218471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azxpjdasezmmmwbkykesvzjkewivwzkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713516.7264442-263-173791641977326/AnsiballZ_file.py'
Jan 06 15:31:57 compute-0 sudo[218471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:57 compute-0 python3.9[218473]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:31:57 compute-0 sudo[218471]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:57 compute-0 sudo[218623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwzssrolodjyurybjlmqgwwxgtyqccea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713517.5419538-271-259053615233802/AnsiballZ_file.py'
Jan 06 15:31:57 compute-0 sudo[218623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:58 compute-0 python3.9[218625]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:31:58 compute-0 sudo[218623]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:58 compute-0 sudo[218775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvvamunusnvgehgwvuphxxhwmukuuiql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713518.3272085-279-34549852217091/AnsiballZ_stat.py'
Jan 06 15:31:58 compute-0 sudo[218775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:58 compute-0 python3.9[218777]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:31:58 compute-0 sudo[218775]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:59 compute-0 sudo[218898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymhfhbesylnbhhlvscnbslqiycdzpisq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713518.3272085-279-34549852217091/AnsiballZ_copy.py'
Jan 06 15:31:59 compute-0 sudo[218898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:31:59 compute-0 python3.9[218900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713518.3272085-279-34549852217091/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:31:59 compute-0 sudo[218898]: pam_unix(sudo:session): session closed for user root
Jan 06 15:31:59 compute-0 podman[201918]: time="2026-01-06T15:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:31:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21257 "" "Go-http-client/1.1"
Jan 06 15:31:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3001 "" "Go-http-client/1.1"
Jan 06 15:31:59 compute-0 sudo[218976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibzonsigfozjbwnhxzogpnrymoifxnmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713518.3272085-279-34549852217091/AnsiballZ_stat.py'
Jan 06 15:31:59 compute-0 sudo[218976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:00 compute-0 python3.9[218978]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:32:00 compute-0 sudo[218976]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:00 compute-0 sudo[219099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exgrhxmyxozcwszofbvqcailcwlvqwkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713518.3272085-279-34549852217091/AnsiballZ_copy.py'
Jan 06 15:32:00 compute-0 sudo[219099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:00 compute-0 python3.9[219101]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713518.3272085-279-34549852217091/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:32:00 compute-0 sudo[219099]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:01 compute-0 openstack_network_exporter[205258]: ERROR   15:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:32:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:32:01 compute-0 openstack_network_exporter[205258]: ERROR   15:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:32:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:32:01 compute-0 sudo[219251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aimrodvrcmuycxcwlpliulwojktalmus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713520.9799225-279-211743934132946/AnsiballZ_stat.py'
Jan 06 15:32:01 compute-0 sudo[219251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:01 compute-0 python3.9[219253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:32:01 compute-0 sudo[219251]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:01 compute-0 podman[219272]: 2026-01-06 15:32:01.831051648 +0000 UTC m=+0.082533157 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 06 15:32:02 compute-0 sudo[219393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxmmmraavzujqxmpoelfnfuibunywvsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713520.9799225-279-211743934132946/AnsiballZ_copy.py'
Jan 06 15:32:02 compute-0 sudo[219393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:02 compute-0 python3.9[219395]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767713520.9799225-279-211743934132946/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:32:02 compute-0 sudo[219393]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:02 compute-0 sudo[219545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stktduncmuwdjzslkrmbfwgqjszznwly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713522.6456914-321-261556390146633/AnsiballZ_file.py'
Jan 06 15:32:02 compute-0 sudo[219545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:03 compute-0 python3.9[219547]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:03 compute-0 sudo[219545]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:03 compute-0 podman[219548]: 2026-01-06 15:32:03.256006173 +0000 UTC m=+0.054062492 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 06 15:32:03 compute-0 sudo[219718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfanaxepipwfzocftoftdefthlurmrjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713523.4007547-329-70745605703074/AnsiballZ_file.py'
Jan 06 15:32:03 compute-0 sudo[219718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:03 compute-0 python3.9[219720]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:32:03 compute-0 sudo[219718]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:04 compute-0 sudo[219870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyizhyootsrzyigrjpsysnpwlhpfldiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713524.1433344-337-241564317542479/AnsiballZ_stat.py'
Jan 06 15:32:04 compute-0 sudo[219870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:04 compute-0 python3.9[219872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:32:04 compute-0 sudo[219870]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:05 compute-0 sudo[220004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqsjoagqlpxojedfjwpikbrttrnaspxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713524.1433344-337-241564317542479/AnsiballZ_copy.py'
Jan 06 15:32:05 compute-0 sudo[220004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:05 compute-0 podman[219967]: 2026-01-06 15:32:05.147201207 +0000 UTC m=+0.099808785 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Jan 06 15:32:05 compute-0 python3.9[220010]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713524.1433344-337-241564317542479/.source.json _original_basename=.0orbqzw_ follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:05 compute-0 sudo[220004]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:06 compute-0 python3.9[220166]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:08 compute-0 sudo[220587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfgsmptpougryuwmykyvbrdlvdssvggo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713527.7597117-377-173274636386075/AnsiballZ_container_config_data.py'
Jan 06 15:32:08 compute-0 sudo[220587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:08 compute-0 python3.9[220589]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Jan 06 15:32:08 compute-0 sudo[220587]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:09 compute-0 sudo[220739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krmeehjxllhmjvfpwxvfrmurjbglbwvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713528.9139702-388-244603079048157/AnsiballZ_container_config_hash.py'
Jan 06 15:32:09 compute-0 sudo[220739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:09 compute-0 python3.9[220741]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 06 15:32:09 compute-0 sudo[220739]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:10 compute-0 sudo[220891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmkidgxejcklxidbrtqoumiwzvhbmdpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713529.8812544-397-21060426434782/AnsiballZ_podman_container_info.py'
Jan 06 15:32:10 compute-0 sudo[220891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:10 compute-0 python3.9[220893]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 06 15:32:10 compute-0 sudo[220891]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:12 compute-0 sudo[221070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymtkukttyjegbvnjmpyifwjmueefdwrd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713531.4675207-410-133557795324983/AnsiballZ_edpm_container_manage.py'
Jan 06 15:32:12 compute-0 sudo[221070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:12 compute-0 python3[221072]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
Jan 06 15:32:12 compute-0 podman[221111]: 2026-01-06 15:32:12.624343436 +0000 UTC m=+0.067621282 container create 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 06 15:32:12 compute-0 podman[221111]: 2026-01-06 15:32:12.588653661 +0000 UTC m=+0.031931547 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 06 15:32:12 compute-0 python3[221072]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Jan 06 15:32:12 compute-0 podman[221137]: 2026-01-06 15:32:12.799215957 +0000 UTC m=+0.066838881 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:32:12 compute-0 sudo[221070]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:13 compute-0 sudo[221321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfmftdjsfumcppoawgasyxhlhtmyfdcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713533.078588-418-19111602703906/AnsiballZ_stat.py'
Jan 06 15:32:13 compute-0 sudo[221321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:13 compute-0 python3.9[221323]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:32:13 compute-0 sudo[221321]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:14 compute-0 sudo[221475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcspcupcjprmzvlilwnpaehulmvrhgpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713534.018987-427-202554821693477/AnsiballZ_file.py'
Jan 06 15:32:14 compute-0 sudo[221475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:14 compute-0 python3.9[221477]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:14 compute-0 sudo[221475]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:14 compute-0 sudo[221551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhvsunrfwvpxfwxaoejimwfozzztgozr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713534.018987-427-202554821693477/AnsiballZ_stat.py'
Jan 06 15:32:14 compute-0 sudo[221551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:15 compute-0 python3.9[221553]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:32:15 compute-0 sudo[221551]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:15 compute-0 sudo[221702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlhpwtqfwlfihvcuehlyaflqkxslrpnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713535.231881-427-34202213432387/AnsiballZ_copy.py'
Jan 06 15:32:15 compute-0 sudo[221702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:16 compute-0 python3.9[221704]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767713535.231881-427-34202213432387/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:16 compute-0 sudo[221702]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:16 compute-0 sudo[221778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxpabnxkpqcadksakyyjldlznljzsolu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713535.231881-427-34202213432387/AnsiballZ_systemd.py'
Jan 06 15:32:16 compute-0 sudo[221778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:17 compute-0 python3.9[221780]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:32:17 compute-0 systemd[1]: Reloading.
Jan 06 15:32:17 compute-0 systemd-rc-local-generator[221808]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:32:17 compute-0 systemd-sysv-generator[221811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:32:17 compute-0 sudo[221778]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:17 compute-0 sudo[221889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwodaoluopxvqeuhvdjeznkfbrmdeylm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713535.231881-427-34202213432387/AnsiballZ_systemd.py'
Jan 06 15:32:17 compute-0 sudo[221889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:18 compute-0 python3.9[221891]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:32:18 compute-0 systemd[1]: Reloading.
Jan 06 15:32:18 compute-0 systemd-rc-local-generator[221920]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:32:18 compute-0 systemd-sysv-generator[221923]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:32:18 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 06 15:32:18 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cfe3b9321bb9c82b468bd2ce6ddc12d6d11514381229187e74af412fab1001/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 06 15:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cfe3b9321bb9c82b468bd2ce6ddc12d6d11514381229187e74af412fab1001/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 06 15:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cfe3b9321bb9c82b468bd2ce6ddc12d6d11514381229187e74af412fab1001/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 06 15:32:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cfe3b9321bb9c82b468bd2ce6ddc12d6d11514381229187e74af412fab1001/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 06 15:32:18 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04.
Jan 06 15:32:18 compute-0 podman[221932]: 2026-01-06 15:32:18.725086945 +0000 UTC m=+0.182133304 container init 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: + sudo -E kolla_set_configs
Jan 06 15:32:18 compute-0 sudo[221953]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 06 15:32:18 compute-0 sudo[221953]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 06 15:32:18 compute-0 sudo[221953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 06 15:32:18 compute-0 podman[221932]: 2026-01-06 15:32:18.779964478 +0000 UTC m=+0.237010837 container start 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 06 15:32:18 compute-0 podman[221932]: ceilometer_agent_ipmi
Jan 06 15:32:18 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Validating config file
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Copying service configuration files
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 06 15:32:18 compute-0 sudo[221889]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: INFO:__main__:Writing out command to execute
Jan 06 15:32:18 compute-0 sudo[221953]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: ++ cat /run_command
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: + ARGS=
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: + sudo kolla_copy_cacerts
Jan 06 15:32:18 compute-0 sudo[221969]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 06 15:32:18 compute-0 sudo[221969]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 06 15:32:18 compute-0 sudo[221969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 06 15:32:18 compute-0 sudo[221969]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: + [[ ! -n '' ]]
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: + . kolla_extend_start
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: + umask 0022
Jan 06 15:32:18 compute-0 ceilometer_agent_ipmi[221947]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 06 15:32:18 compute-0 podman[221954]: 2026-01-06 15:32:18.853442814 +0000 UTC m=+0.053910549 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:32:18 compute-0 systemd[1]: 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04-69bc0174b93b95e5.service: Main process exited, code=exited, status=1/FAILURE
Jan 06 15:32:18 compute-0 systemd[1]: 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04-69bc0174b93b95e5.service: Failed with result 'exit-code'.
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.697 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.697 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.697 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.697 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.697 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.698 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.698 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.698 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.698 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.698 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.698 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.698 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.698 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.698 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.698 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.699 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.700 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.701 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.701 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.701 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.701 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.701 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.701 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.701 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.701 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.701 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.701 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.701 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.702 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.703 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.703 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.703 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.703 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.703 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.703 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.703 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.703 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.704 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.705 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.706 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.706 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.707 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.708 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.709 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.710 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.711 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.711 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.733 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.735 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.737 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 06 15:32:19 compute-0 python3.9[222127]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 06 15:32:19 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:19.866 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpchc0zsw3/privsep.sock']
Jan 06 15:32:19 compute-0 sudo[222132]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpchc0zsw3/privsep.sock
Jan 06 15:32:19 compute-0 sudo[222132]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 06 15:32:19 compute-0 sudo[222132]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 06 15:32:20 compute-0 sudo[222132]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.605 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.605 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpchc0zsw3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.440 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.446 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.449 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.449 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.710 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.711 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.713 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.713 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.713 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.713 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.713 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.713 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.714 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.714 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.714 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.714 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.714 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.722 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.722 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.723 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.723 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.723 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.723 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.723 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.724 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.724 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.724 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.724 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.725 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.725 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.726 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.726 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.726 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.727 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.727 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.727 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.727 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.728 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.728 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.728 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.728 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.728 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.729 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.729 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.729 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.729 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.730 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.730 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.730 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.730 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.730 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.730 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.731 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.731 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.731 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.731 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.731 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.732 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.732 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.732 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.732 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.732 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.733 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.733 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.733 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.733 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.734 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.734 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.734 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.734 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.734 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.735 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.735 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.735 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.736 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.736 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.736 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.736 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.736 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.737 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.737 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.737 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.737 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.738 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.738 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.738 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.738 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.739 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.739 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.739 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.740 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.740 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.740 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.740 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.741 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.741 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.741 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.742 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.742 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.742 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.743 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.744 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.745 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.745 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.745 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.745 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.745 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.746 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.746 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.746 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.747 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.747 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.747 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.748 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.748 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.748 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.749 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.749 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.750 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.750 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.750 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.750 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.750 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.751 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.751 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.751 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.751 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.752 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.752 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.752 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.753 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.753 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.753 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.753 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.753 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.754 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.754 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.754 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.755 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.755 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.755 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.755 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.755 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.756 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.756 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.756 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.756 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.756 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.756 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.756 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.757 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.758 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.758 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.758 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.758 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.758 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.759 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.759 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.759 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.759 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.759 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.759 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.759 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.759 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.760 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.760 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.760 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.760 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.760 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.760 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.760 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.761 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.762 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.763 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.763 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.763 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.763 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 06 15:32:20 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:20.766 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 06 15:32:20 compute-0 sudo[222289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tayhgsacdsffttxpipvgckfwpjenlavr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713540.2475903-468-28807625650144/AnsiballZ_stat.py'
Jan 06 15:32:20 compute-0 sudo[222289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:20 compute-0 python3.9[222292]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:32:21 compute-0 sudo[222289]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:21 compute-0 sudo[222415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgkptthhnbeyqjafpxoikcfyyafwoqqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713540.2475903-468-28807625650144/AnsiballZ_copy.py'
Jan 06 15:32:21 compute-0 sudo[222415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:21 compute-0 podman[222417]: 2026-01-06 15:32:21.63178521 +0000 UTC m=+0.087304023 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:32:21 compute-0 python3.9[222418]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713540.2475903-468-28807625650144/.source.yaml _original_basename=.fxozuhlv follow=False checksum=28dee4da2b5909421cb20e4a30577a17238d736e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:21 compute-0 sudo[222415]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:21 compute-0 podman[222443]: 2026-01-06 15:32:21.811118109 +0000 UTC m=+0.132657154 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:32:22 compute-0 sudo[222617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbmvijdwjyuyytstyvtyawlmnnmxnxfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713542.0984685-485-275726174320942/AnsiballZ_file.py'
Jan 06 15:32:22 compute-0 sudo[222617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:22 compute-0 python3.9[222619]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:22 compute-0 sudo[222617]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:23 compute-0 sudo[222769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htpetvpdofzktfvqtkvzayxhcauppnft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713542.8502584-493-243230219878848/AnsiballZ_file.py'
Jan 06 15:32:23 compute-0 sudo[222769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:23 compute-0 python3.9[222771]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 06 15:32:23 compute-0 sudo[222769]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:24 compute-0 python3.9[222921]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:26 compute-0 sudo[223342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fngxdvangljahkfsppydkmqooxbqnuuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713546.0463603-527-235166040200458/AnsiballZ_container_config_data.py'
Jan 06 15:32:26 compute-0 sudo[223342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:26 compute-0 python3.9[223344]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Jan 06 15:32:26 compute-0 sudo[223342]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:27 compute-0 nova_compute[185513]: 2026-01-06 15:32:27.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:32:27 compute-0 sudo[223494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esedvjmlikpyuivxloybvulaiwaalyqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713547.0011885-538-257682964479711/AnsiballZ_container_config_hash.py'
Jan 06 15:32:27 compute-0 sudo[223494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:27 compute-0 python3.9[223496]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 06 15:32:27 compute-0 sudo[223494]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:28 compute-0 nova_compute[185513]: 2026-01-06 15:32:28.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:32:28 compute-0 sudo[223646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbdwfysqovcxfwmximtfopixlnaqpywe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713547.9407902-547-223160314460624/AnsiballZ_podman_container_info.py'
Jan 06 15:32:28 compute-0 sudo[223646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:28 compute-0 python3.9[223648]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 06 15:32:28 compute-0 sudo[223646]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:29 compute-0 podman[201918]: time="2026-01-06T15:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:32:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 24317 "" "Go-http-client/1.1"
Jan 06 15:32:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3448 "" "Go-http-client/1.1"
Jan 06 15:32:29 compute-0 sudo[223825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guhghtwtxmirdybalklhkzpgwmtbtlhc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713549.5631309-560-235141181133677/AnsiballZ_edpm_container_manage.py'
Jan 06 15:32:29 compute-0 sudo[223825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.036 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.037 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.037 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.037 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.037 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.061 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.061 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.061 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.061 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:32:30 compute-0 python3[223827]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.225 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.226 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5749MB free_disk=72.48088836669922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.226 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.226 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.305 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.307 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.329 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.344 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.346 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:32:30 compute-0 nova_compute[185513]: 2026-01-06 15:32:30.346 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:32:30 compute-0 podman[223863]: 2026-01-06 15:32:30.352292016 +0000 UTC m=+0.054614528 container create f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, release-0.7.12=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Jan 06 15:32:30 compute-0 podman[223863]: 2026-01-06 15:32:30.319410295 +0000 UTC m=+0.021732787 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 06 15:32:30 compute-0 python3[223827]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Jan 06 15:32:30 compute-0 sudo[223825]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:30 compute-0 sudo[224051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rahscwhojkmgivhxnxkurrrxkunhtwvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713550.6784792-568-38434122203943/AnsiballZ_stat.py'
Jan 06 15:32:30 compute-0 sudo[224051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:31 compute-0 python3.9[224053]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:32:31 compute-0 sudo[224051]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:31 compute-0 nova_compute[185513]: 2026-01-06 15:32:31.332 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:32:31 compute-0 nova_compute[185513]: 2026-01-06 15:32:31.333 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:32:31 compute-0 openstack_network_exporter[205258]: ERROR   15:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:32:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:32:31 compute-0 openstack_network_exporter[205258]: ERROR   15:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:32:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:32:31 compute-0 sudo[224205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unusxpdbleoviwmuztkvhqycyzrhlvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713551.5374553-577-46154087731195/AnsiballZ_file.py'
Jan 06 15:32:31 compute-0 sudo[224205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:31 compute-0 podman[224207]: 2026-01-06 15:32:31.997269967 +0000 UTC m=+0.082292430 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 06 15:32:32 compute-0 nova_compute[185513]: 2026-01-06 15:32:32.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:32:32 compute-0 python3.9[224208]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:32 compute-0 sudo[224205]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:32 compute-0 sudo[224300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uanymeysczwguribudsxczazeniuxpcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713551.5374553-577-46154087731195/AnsiballZ_stat.py'
Jan 06 15:32:32 compute-0 sudo[224300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:32 compute-0 python3.9[224302]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:32:32 compute-0 sudo[224300]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:33 compute-0 sudo[224451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bepunjwoucqdksjgonenylfjxlmnhxna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713552.644137-577-205327516216335/AnsiballZ_copy.py'
Jan 06 15:32:33 compute-0 sudo[224451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:33 compute-0 python3.9[224453]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767713552.644137-577-205327516216335/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:33 compute-0 sudo[224451]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:33 compute-0 sudo[224543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bggyryjprgsfgokrlpgtfchixwrokzco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713552.644137-577-205327516216335/AnsiballZ_systemd.py'
Jan 06 15:32:33 compute-0 podman[224501]: 2026-01-06 15:32:33.670667622 +0000 UTC m=+0.066699637 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute)
Jan 06 15:32:33 compute-0 sudo[224543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:33 compute-0 python3.9[224546]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 06 15:32:33 compute-0 systemd[1]: Reloading.
Jan 06 15:32:34 compute-0 systemd-sysv-generator[224574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:32:34 compute-0 systemd-rc-local-generator[224571]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:32:34 compute-0 sudo[224543]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:34 compute-0 sudo[224655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozkubpzmezoxplqlykrlixhujcyiecca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713552.644137-577-205327516216335/AnsiballZ_systemd.py'
Jan 06 15:32:34 compute-0 sudo[224655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:34 compute-0 python3.9[224657]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 06 15:32:35 compute-0 podman[224659]: 2026-01-06 15:32:35.801550692 +0000 UTC m=+0.073288262 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, config_id=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:32:35 compute-0 systemd[1]: Reloading.
Jan 06 15:32:36 compute-0 systemd-rc-local-generator[224705]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 06 15:32:36 compute-0 systemd-sysv-generator[224709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 06 15:32:36 compute-0 systemd[1]: Starting kepler container...
Jan 06 15:32:36 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:32:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca.
Jan 06 15:32:36 compute-0 podman[224718]: 2026-01-06 15:32:36.457885623 +0000 UTC m=+0.155856008 container init f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, io.buildah.version=1.29.0, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., version=9.4, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=kepler, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, config_id=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 06 15:32:36 compute-0 podman[224718]: 2026-01-06 15:32:36.483401549 +0000 UTC m=+0.181371654 container start f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, architecture=x86_64, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, release-0.7.12=, com.redhat.component=ubi9-container, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 06 15:32:36 compute-0 kepler[224733]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 06 15:32:36 compute-0 podman[224718]: kepler
Jan 06 15:32:36 compute-0 systemd[1]: Started kepler container.
Jan 06 15:32:36 compute-0 kepler[224733]: I0106 15:32:36.500485       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 06 15:32:36 compute-0 kepler[224733]: I0106 15:32:36.501251       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 06 15:32:36 compute-0 kepler[224733]: I0106 15:32:36.501269       1 config.go:295] kernel version: 5.14
Jan 06 15:32:36 compute-0 kepler[224733]: I0106 15:32:36.501920       1 power.go:78] Unable to obtain power, use estimate method
Jan 06 15:32:36 compute-0 kepler[224733]: I0106 15:32:36.501943       1 redfish.go:169] failed to get redfish credential file path
Jan 06 15:32:36 compute-0 kepler[224733]: I0106 15:32:36.502386       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 06 15:32:36 compute-0 kepler[224733]: I0106 15:32:36.502402       1 power.go:79] using none to obtain power
Jan 06 15:32:36 compute-0 kepler[224733]: E0106 15:32:36.502417       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 06 15:32:36 compute-0 kepler[224733]: E0106 15:32:36.502439       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 06 15:32:36 compute-0 kepler[224733]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 06 15:32:36 compute-0 kepler[224733]: I0106 15:32:36.505077       1 exporter.go:84] Number of CPUs: 8
Jan 06 15:32:36 compute-0 sudo[224655]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:36 compute-0 podman[224738]: 2026-01-06 15:32:36.585262866 +0000 UTC m=+0.087301913 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9, vcs-type=git)
Jan 06 15:32:36 compute-0 systemd[1]: f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca-719ccb7cc3296c28.service: Main process exited, code=exited, status=1/FAILURE
Jan 06 15:32:36 compute-0 systemd[1]: f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca-719ccb7cc3296c28.service: Failed with result 'exit-code'.
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.074381       1 watcher.go:83] Using in cluster k8s config
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.074481       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 06 15:32:37 compute-0 kepler[224733]: E0106 15:32:37.074821       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.082343       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.082418       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.089996       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.090059       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.104658       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.104719       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.104742       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.117695       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.117752       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.117761       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.117770       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.117780       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.117797       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.123373       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.128238       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.132309       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.133057       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.135441       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 06 15:32:37 compute-0 kepler[224733]: I0106 15:32:37.137098       1 exporter.go:208] Started Kepler in 636.899556ms
Jan 06 15:32:37 compute-0 python3.9[224928]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 06 15:32:38 compute-0 sudo[225078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlnnqskymqwczrgtqynfoxjwfhdptaxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713557.889638-618-214211856295616/AnsiballZ_stat.py'
Jan 06 15:32:38 compute-0 sudo[225078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:38 compute-0 python3.9[225080]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:32:38 compute-0 sudo[225078]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:39 compute-0 sudo[225203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fucxsjxberxssobowhmwyqbbyvmhnofi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713557.889638-618-214211856295616/AnsiballZ_copy.py'
Jan 06 15:32:39 compute-0 sudo[225203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:39 compute-0 python3.9[225205]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713557.889638-618-214211856295616/.source.yaml _original_basename=.8g98o4t2 follow=False checksum=c111718d9c8ee0d6ac68819fb8e18a591a7f547c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:39 compute-0 sudo[225203]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:40 compute-0 sudo[225355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdgyzqgjpvsirgyfinmgwzlrzuokmftn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713559.8175285-633-225968945994078/AnsiballZ_systemd.py'
Jan 06 15:32:40 compute-0 sudo[225355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:40 compute-0 python3.9[225357]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:32:40 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Jan 06 15:32:40 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:40.825 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Jan 06 15:32:40 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:40.928 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Jan 06 15:32:40 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:40.929 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Jan 06 15:32:40 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:40.929 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Jan 06 15:32:40 compute-0 ceilometer_agent_ipmi[221947]: 2026-01-06 15:32:40.944 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Jan 06 15:32:41 compute-0 systemd[1]: libpod-2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04.scope: Deactivated successfully.
Jan 06 15:32:41 compute-0 systemd[1]: libpod-2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04.scope: Consumed 2.324s CPU time.
Jan 06 15:32:41 compute-0 podman[225361]: 2026-01-06 15:32:41.152494276 +0000 UTC m=+0.386541777 container died 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:32:41 compute-0 systemd[1]: 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04-69bc0174b93b95e5.timer: Deactivated successfully.
Jan 06 15:32:41 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04.
Jan 06 15:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04-userdata-shm.mount: Deactivated successfully.
Jan 06 15:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-08cfe3b9321bb9c82b468bd2ce6ddc12d6d11514381229187e74af412fab1001-merged.mount: Deactivated successfully.
Jan 06 15:32:41 compute-0 podman[225361]: 2026-01-06 15:32:41.726292911 +0000 UTC m=+0.960340382 container cleanup 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:32:41 compute-0 podman[225361]: ceilometer_agent_ipmi
Jan 06 15:32:41 compute-0 podman[225387]: ceilometer_agent_ipmi
Jan 06 15:32:41 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Jan 06 15:32:41 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Jan 06 15:32:41 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 06 15:32:41 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cfe3b9321bb9c82b468bd2ce6ddc12d6d11514381229187e74af412fab1001/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 06 15:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cfe3b9321bb9c82b468bd2ce6ddc12d6d11514381229187e74af412fab1001/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 06 15:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cfe3b9321bb9c82b468bd2ce6ddc12d6d11514381229187e74af412fab1001/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 06 15:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cfe3b9321bb9c82b468bd2ce6ddc12d6d11514381229187e74af412fab1001/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 06 15:32:42 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04.
Jan 06 15:32:42 compute-0 podman[225399]: 2026-01-06 15:32:42.055717735 +0000 UTC m=+0.198759625 container init 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: + sudo -E kolla_set_configs
Jan 06 15:32:42 compute-0 podman[225399]: 2026-01-06 15:32:42.103282694 +0000 UTC m=+0.246324564 container start 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 06 15:32:42 compute-0 podman[225399]: ceilometer_agent_ipmi
Jan 06 15:32:42 compute-0 sudo[225419]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 06 15:32:42 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 06 15:32:42 compute-0 sudo[225419]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 06 15:32:42 compute-0 sudo[225419]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Validating config file
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Copying service configuration files
Jan 06 15:32:42 compute-0 sudo[225355]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: INFO:__main__:Writing out command to execute
Jan 06 15:32:42 compute-0 sudo[225419]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:42 compute-0 podman[225420]: 2026-01-06 15:32:42.187431663 +0000 UTC m=+0.077696209 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: ++ cat /run_command
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: + ARGS=
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: + sudo kolla_copy_cacerts
Jan 06 15:32:42 compute-0 systemd[1]: 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04-4f7a0f54f483cb9c.service: Main process exited, code=exited, status=1/FAILURE
Jan 06 15:32:42 compute-0 systemd[1]: 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04-4f7a0f54f483cb9c.service: Failed with result 'exit-code'.
Jan 06 15:32:42 compute-0 sudo[225441]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 06 15:32:42 compute-0 sudo[225441]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 06 15:32:42 compute-0 sudo[225441]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 06 15:32:42 compute-0 sudo[225441]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: + [[ ! -n '' ]]
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: + . kolla_extend_start
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: + umask 0022
Jan 06 15:32:42 compute-0 ceilometer_agent_ipmi[225413]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 06 15:32:42 compute-0 sudo[225604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwoihggevktiriaoneezucamsxpeccjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713562.4442317-641-51571425317429/AnsiballZ_systemd.py'
Jan 06 15:32:42 compute-0 sudo[225604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:42 compute-0 podman[225567]: 2026-01-06 15:32:42.931752994 +0000 UTC m=+0.071529775 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.226 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.227 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.227 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.227 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.228 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.228 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.228 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.228 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.229 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.229 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.230 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.230 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.230 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.231 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.231 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.231 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.231 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.232 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.232 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.232 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.232 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.233 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.233 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.233 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.234 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.234 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.234 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.234 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.235 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.235 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.235 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.235 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.235 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.236 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.236 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.236 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.236 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.237 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.237 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.237 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.237 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.238 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.238 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.238 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.238 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.239 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.239 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.239 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.240 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.240 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.240 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.240 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.241 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.241 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.241 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.241 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.242 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.242 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.242 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.242 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.243 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.243 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.243 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.243 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.244 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.244 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.244 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.245 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.245 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.245 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.245 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.246 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.247 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.247 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.247 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.247 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.248 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.248 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.248 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.248 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.249 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.249 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.249 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.249 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.249 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.250 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.251 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.251 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.251 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.251 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.252 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.252 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.252 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.252 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.253 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.253 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.253 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.254 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.254 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.254 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.254 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.254 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.254 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.254 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.255 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 python3.9[225612]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.255 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.255 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.255 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.255 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.255 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.255 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.255 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.255 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.256 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.256 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.256 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.256 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.256 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.256 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.256 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.256 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.257 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.257 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.257 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.257 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.257 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.257 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.257 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.257 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.257 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.258 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.258 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.258 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.258 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.258 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.258 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.258 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.258 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.258 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.259 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.259 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.259 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.259 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.259 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.259 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.259 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.259 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.259 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.260 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.260 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.260 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.260 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.260 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.260 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.260 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.260 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.260 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.261 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.261 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.261 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
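
The block ending with the asterisk row above is oslo.config's standard start-up option dump: cotyledon's oslo_config_glue asks ConfigOpts to log every registered option at DEBUG level, and options registered with secret=True are printed as **** (hence coordination.backend_url, publisher.telemetry_secret, vmware.host_password and the rgw keys). A minimal sketch of how such a dump is produced, assuming only that oslo.config is installed; the option names below are illustrative, not Ceilometer's real set:

    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    # Illustrative options only; secret=True is what makes a value appear as ****.
    CONF.register_opts([
        cfg.StrOpt('pipeline_cfg_file', default='pipeline.yaml'),
        cfg.StrOpt('backend_url', secret=True, default='redis://example'),
    ])

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([], project='demo')                  # parse an (empty) command line
    CONF.log_opt_values(LOG, logging.DEBUG)   # emits the "Full set of CONF" dump seen above
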
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.301 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.303 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.305 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
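
The three INFO lines above are the polling manager's dynamic-pollster discovery: it walks each directory in pollsters_definitions_dirs looking for YAML definitions and, finding none in /etc/ceilometer/pollsters.d, continues with built-in pollsters only. A rough sketch of that discovery step, using only the Python standard library; it approximates the behaviour logged above and is not Ceilometer's actual code:

    import glob
    import logging
    import os

    LOG = logging.getLogger('ceilometer.polling.manager')

    def find_dynamic_pollster_files(dirs=('/etc/ceilometer/pollsters.d',)):
        """Collect YAML pollster definitions, mirroring the log messages above."""
        LOG.info("Looking for dynamic pollsters configurations at [%s].", list(dirs))
        found = []
        for d in dirs:
            files = sorted(glob.glob(os.path.join(d, '*.yaml')))
            if not files:
                LOG.info("No dynamic pollsters found in folder [%s].", d)
            found.extend(files)
        if not found:
            LOG.info("No dynamic pollsters file found in dirs [%s].", list(dirs))
        return found
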
Jan 06 15:32:43 compute-0 systemd[1]: Stopping kepler container...
Jan 06 15:32:43 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.330 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpi3nmwnky/privsep.sock']
Jan 06 15:32:43 compute-0 sudo[225628]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpi3nmwnky/privsep.sock
Jan 06 15:32:43 compute-0 sudo[225628]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 06 15:32:43 compute-0 sudo[225628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 06 15:32:43 compute-0 kepler[224733]: I0106 15:32:43.423422       1 exporter.go:218] Received shutdown signal
Jan 06 15:32:43 compute-0 kepler[224733]: I0106 15:32:43.424954       1 exporter.go:226] Exiting...
Jan 06 15:32:43 compute-0 systemd[1]: libpod-f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca.scope: Deactivated successfully.
Jan 06 15:32:43 compute-0 systemd[1]: libpod-f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca.scope: Consumed 1.020s CPU time.
Jan 06 15:32:43 compute-0 conmon[224733]: conmon f36727e67c3e891afbef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca.scope/container/memory.events
Jan 06 15:32:43 compute-0 podman[225626]: 2026-01-06 15:32:43.624922061 +0000 UTC m=+0.281204428 container stop f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, vendor=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, version=9.4, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler)
Jan 06 15:32:43 compute-0 podman[225626]: 2026-01-06 15:32:43.630383776 +0000 UTC m=+0.286666193 container died f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-type=git, vendor=Red Hat, Inc., version=9.4, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, build-date=2024-09-18T21:23:30, config_id=kepler, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
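
The two podman events above carry the container's full config_data, i.e. the edpm_ansible-managed definition the kepler container was created from. As an illustration of how that structure corresponds to a container invocation, the sketch below turns a trimmed copy of that config_data dict into the equivalent podman command-line flags; the flag names are the standard podman CLI, but the helper itself is hypothetical:

    # config_data as recorded in the "container stop" event above
    # (trimmed to the fields used below).
    config_data = {
        'command': '-v=2',
        'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true'},
        'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12',
        'net': 'host',
        'ports': ['8888:8888'],
        'privileged': True,
        'restart': 'always',
        'volumes': ['/lib/modules:/lib/modules:ro', '/sys:/sys', '/proc:/proc'],
    }

    def to_podman_args(name, cfg):
        """Hypothetical helper: render config_data as 'podman run' arguments."""
        args = ['podman', 'run', '--name', name, '--detach']
        if cfg.get('privileged'):
            args.append('--privileged')
        if cfg.get('net'):
            args.append('--net=%s' % cfg['net'])
        if cfg.get('restart'):
            args.append('--restart=%s' % cfg['restart'])
        for key, value in cfg.get('environment', {}).items():
            args += ['--env', '%s=%s' % (key, value)]
        for port in cfg.get('ports', []):
            args += ['--publish', port]
        for volume in cfg.get('volumes', []):
            args += ['--volume', volume]
        args.append(cfg['image'])
        args += cfg.get('command', '').split()
        return args

    print(' '.join(to_podman_args('kepler', config_data)))
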
Jan 06 15:32:44 compute-0 sudo[225628]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.043 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.043 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpi3nmwnky/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.926 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.933 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.937 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:43.937 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
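
The privsep lines above show the usual oslo.privsep bootstrap: the unprivileged agent runs sudo ceilometer-rootwrap ... privsep-helper, the helper forks a daemon that keeps only the capability set reported (CAP_CHOWN through CAP_SYS_ADMIN), and the agent then talks to it over the Unix socket under /tmp. A minimal sketch of how such a context is declared with oslo.privsep; the module path and the decorated function are illustrative, not Ceilometer's exact code:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Capability set matching what the daemon reports above.
    sys_admin_pctxt = priv_context.PrivContext(
        'ceilometer',
        cfg_section='privsep',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE,
                      caps.CAP_DAC_READ_SEARCH, caps.CAP_FOWNER,
                      caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def read_privileged_file(path):
        """Illustrative entrypoint: executes inside the privsep daemon as uid 0."""
        with open(path) as f:
            return f.read()
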
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.183 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.183 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.185 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.186 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.186 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.186 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.186 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.186 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.187 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.187 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.187 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.187 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.188 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
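
The DEBUG lines above come from per-extension pollster loading: each IPMI pollster entry point is instantiated, and any exception ("IPMITool not supported on host" on a virtual node without a BMC, or the object.__new__() TypeError from the node-manager pollsters) is caught and the extension skipped, which is why the block ends with the WARNING that nothing survived from the ['ipmi'] namespace. A rough sketch of that pattern using stevedore; the entry-point namespace name is an assumption and the callback is illustrative, not the manager's real _catch_extension_load_error:

    import logging
    from stevedore import extension

    LOG = logging.getLogger('ceilometer.polling.manager')

    def on_load_failure(manager, entrypoint, exc):
        # Mirrors the "Skip loading extension for <name>: <error>" lines above.
        LOG.debug("Skip loading extension for %s: %s", entrypoint.name, exc)

    # 'ceilometer.poll.ipmi' is assumed here to be the entry-point namespace
    # behind the ['ipmi'] polling namespace.
    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.ipmi',
        invoke_on_load=True,
        on_load_failure_callback=on_load_failure,
    )
    if not mgr.extensions:
        LOG.warning("No valid pollsters can be loaded from ['ipmi'] namespaces")
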
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.193 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.194 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.194 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.194 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.194 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.194 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.195 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.195 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.195 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.195 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.195 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.196 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.196 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.196 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.196 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.197 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.197 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.197 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.198 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.198 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.198 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.198 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.198 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.199 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.199 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.199 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.199 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.199 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.200 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.200 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.200 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.200 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.200 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.200 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.201 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.201 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.201 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.201 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.201 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.202 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.202 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.202 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.202 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.202 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.203 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.203 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.203 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.203 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.203 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.204 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.204 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.204 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.204 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.204 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.205 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.205 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.205 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.205 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.205 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.206 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.206 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.206 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.206 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.206 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.207 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.207 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.207 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.207 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.207 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.208 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.208 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.208 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.208 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.208 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.209 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.209 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.209 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.209 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.209 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.210 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.210 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.210 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.210 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.210 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.210 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.211 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.211 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.211 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.211 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.211 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.212 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.212 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.212 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.212 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.213 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.213 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.213 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.213 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.213 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.214 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.214 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.214 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.214 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.214 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.215 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.215 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.215 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.215 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.215 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.216 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.216 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.216 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.216 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.216 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.217 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.217 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.217 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.217 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.217 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.218 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.218 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.218 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.218 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.218 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.219 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.219 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.219 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.219 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.220 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.220 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.220 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.220 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.220 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.221 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.221 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.221 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.221 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.221 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.221 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.222 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.222 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.222 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.222 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.222 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.223 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.223 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.223 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.223 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.223 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.224 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.224 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.224 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.224 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.224 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.224 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.225 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.225 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.225 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.225 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.225 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.226 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.226 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.226 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.226 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.226 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.227 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.227 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.227 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.227 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.227 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.227 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.228 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.228 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.228 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.228 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.228 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.229 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.229 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.229 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.229 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.229 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.230 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.230 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.230 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.230 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.231 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.231 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.231 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.231 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.231 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.231 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.232 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.232 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.232 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.232 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.232 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.233 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.233 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.233 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.233 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.233 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.233 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.234 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 06 15:32:44 compute-0 ceilometer_agent_ipmi[225413]: 2026-01-06 15:32:44.239 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 06 15:32:44 compute-0 systemd[1]: f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca-719ccb7cc3296c28.timer: Deactivated successfully.
Jan 06 15:32:44 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca.
Jan 06 15:32:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca-userdata-shm.mount: Deactivated successfully.
Jan 06 15:32:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-87bd23ef1f88e036ef46839a6d986ef1a5173b36835bbaa3cc0e4dd59793f7f8-merged.mount: Deactivated successfully.
Jan 06 15:32:44 compute-0 podman[225626]: 2026-01-06 15:32:44.537684763 +0000 UTC m=+1.193967170 container cleanup f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git)
Jan 06 15:32:44 compute-0 podman[225626]: kepler
Jan 06 15:32:44 compute-0 podman[225663]: kepler
Jan 06 15:32:44 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Jan 06 15:32:44 compute-0 systemd[1]: Stopped kepler container.
Jan 06 15:32:44 compute-0 systemd[1]: Starting kepler container...
Jan 06 15:32:44 compute-0 systemd[1]: Started libcrun container.
Jan 06 15:32:44 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca.
Jan 06 15:32:44 compute-0 podman[225676]: 2026-01-06 15:32:44.894296487 +0000 UTC m=+0.225499793 container init f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, config_id=kepler, distribution-scope=public, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9)
Jan 06 15:32:44 compute-0 kepler[225691]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 06 15:32:44 compute-0 podman[225676]: 2026-01-06 15:32:44.934762059 +0000 UTC m=+0.265965375 container start f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, build-date=2024-09-18T21:23:30, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, io.openshift.expose-services=, distribution-scope=public, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.tags=base rhel9)
Jan 06 15:32:44 compute-0 kepler[225691]: I0106 15:32:44.937478       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 06 15:32:44 compute-0 kepler[225691]: I0106 15:32:44.937696       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 06 15:32:44 compute-0 kepler[225691]: I0106 15:32:44.937721       1 config.go:295] kernel version: 5.14
Jan 06 15:32:44 compute-0 kepler[225691]: I0106 15:32:44.938462       1 power.go:78] Unable to obtain power, use estimate method
Jan 06 15:32:44 compute-0 kepler[225691]: I0106 15:32:44.938506       1 redfish.go:169] failed to get redfish credential file path
Jan 06 15:32:44 compute-0 kepler[225691]: I0106 15:32:44.939224       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 06 15:32:44 compute-0 kepler[225691]: I0106 15:32:44.939246       1 power.go:79] using none to obtain power
Jan 06 15:32:44 compute-0 kepler[225691]: E0106 15:32:44.939274       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 06 15:32:44 compute-0 kepler[225691]: E0106 15:32:44.939307       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 06 15:32:44 compute-0 podman[225676]: kepler
Jan 06 15:32:44 compute-0 kepler[225691]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 06 15:32:44 compute-0 kepler[225691]: I0106 15:32:44.942403       1 exporter.go:84] Number of CPUs: 8
Jan 06 15:32:44 compute-0 systemd[1]: Started kepler container.
Jan 06 15:32:44 compute-0 sudo[225604]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:45 compute-0 podman[225701]: 2026-01-06 15:32:45.029574349 +0000 UTC m=+0.082929327 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, container_name=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 06 15:32:45 compute-0 systemd[1]: f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca-22005a8b0c1a0d33.service: Main process exited, code=exited, status=1/FAILURE
Jan 06 15:32:45 compute-0 systemd[1]: f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca-22005a8b0c1a0d33.service: Failed with result 'exit-code'.
Jan 06 15:32:45 compute-0 sudo[225872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mandfxwaqnftoknjokrnklypfkhxxpji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713565.1689446-649-96669499267249/AnsiballZ_find.py'
Jan 06 15:32:45 compute-0 sudo[225872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.513496       1 watcher.go:83] Using in cluster k8s config
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.513543       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 06 15:32:45 compute-0 kepler[225691]: E0106 15:32:45.513629       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.521248       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.521290       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.529311       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.529357       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.539288       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.539330       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.539345       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.553427       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.553484       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.553493       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.553502       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.553512       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.553528       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.553642       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.553685       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.553716       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.553745       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.553948       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 06 15:32:45 compute-0 kepler[225691]: I0106 15:32:45.554532       1 exporter.go:208] Started Kepler in 617.482421ms
Jan 06 15:32:45 compute-0 python3.9[225874]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 06 15:32:45 compute-0 sudo[225872]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:46 compute-0 sudo[226034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqsbieclwfnkgbzhfbiilhmaijpprpzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713566.2057579-659-14226602387637/AnsiballZ_podman_container_info.py'
Jan 06 15:32:46 compute-0 sudo[226034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:46 compute-0 python3.9[226036]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 06 15:32:47 compute-0 sudo[226034]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:47 compute-0 sudo[226199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqgqunyjygxpmbwixzpgjidluuiznngg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713567.297568-667-111285275255424/AnsiballZ_podman_container_exec.py'
Jan 06 15:32:47 compute-0 sudo[226199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:48 compute-0 python3.9[226201]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:32:48 compute-0 systemd[1]: Started libpod-conmon-79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2.scope.
Jan 06 15:32:48 compute-0 podman[226202]: 2026-01-06 15:32:48.340958301 +0000 UTC m=+0.143759648 container exec 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 06 15:32:48 compute-0 podman[226202]: 2026-01-06 15:32:48.352955369 +0000 UTC m=+0.155756716 container exec_died 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 06 15:32:48 compute-0 sudo[226199]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:48 compute-0 systemd[1]: libpod-conmon-79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2.scope: Deactivated successfully.
Jan 06 15:32:49 compute-0 sudo[226381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obxqpncaipxlzsdxnbasvvdamstkzqye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713568.6713185-675-41607609719725/AnsiballZ_podman_container_exec.py'
Jan 06 15:32:49 compute-0 sudo[226381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:49 compute-0 python3.9[226383]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:32:49 compute-0 systemd[1]: Started libpod-conmon-79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2.scope.
Jan 06 15:32:49 compute-0 podman[226384]: 2026-01-06 15:32:49.573522811 +0000 UTC m=+0.137106172 container exec 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 06 15:32:49 compute-0 podman[226384]: 2026-01-06 15:32:49.606887955 +0000 UTC m=+0.170471266 container exec_died 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 06 15:32:49 compute-0 sudo[226381]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:49 compute-0 systemd[1]: libpod-conmon-79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2.scope: Deactivated successfully.
Jan 06 15:32:50 compute-0 sudo[226565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwfdroortuayjpfhoigspojnnsovwabp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713569.8902051-683-198782135658617/AnsiballZ_file.py'
Jan 06 15:32:50 compute-0 sudo[226565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:50 compute-0 python3.9[226567]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:50 compute-0 sudo[226565]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:51 compute-0 sudo[226717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnetfqhyzraaxqhimjgridlnupypkioj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713571.0131814-692-30175302863900/AnsiballZ_podman_container_info.py'
Jan 06 15:32:51 compute-0 sudo[226717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:51 compute-0 python3.9[226719]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 06 15:32:51 compute-0 sudo[226717]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:51 compute-0 podman[226726]: 2026-01-06 15:32:51.873006756 +0000 UTC m=+0.123028959 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:32:52 compute-0 podman[226756]: 2026-01-06 15:32:52.076017533 +0000 UTC m=+0.156560947 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 06 15:32:52 compute-0 sudo[226928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfykjgcpirvknywbhpgufjvqbpasoqwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713572.1184502-700-220745870962172/AnsiballZ_podman_container_exec.py'
Jan 06 15:32:52 compute-0 sudo[226928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:52 compute-0 python3.9[226930]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:32:52 compute-0 systemd[1]: Started libpod-conmon-7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487.scope.
Jan 06 15:32:52 compute-0 podman[226931]: 2026-01-06 15:32:52.96863761 +0000 UTC m=+0.142934626 container exec 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 06 15:32:53 compute-0 podman[226931]: 2026-01-06 15:32:53.004816238 +0000 UTC m=+0.179113224 container exec_died 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 06 15:32:53 compute-0 systemd[1]: libpod-conmon-7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487.scope: Deactivated successfully.
Jan 06 15:32:53 compute-0 sudo[226928]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:32:53.670 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:32:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:32:53.671 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:32:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:32:53.672 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:32:53 compute-0 sudo[227109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxbhtyvwxmeyhosdlwqayggpypyzhoig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713573.3541143-708-199197681884753/AnsiballZ_podman_container_exec.py'
Jan 06 15:32:53 compute-0 sudo[227109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:54 compute-0 python3.9[227111]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:32:54 compute-0 systemd[1]: Started libpod-conmon-7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487.scope.
Jan 06 15:32:54 compute-0 podman[227112]: 2026-01-06 15:32:54.243250035 +0000 UTC m=+0.146539272 container exec 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 06 15:32:54 compute-0 podman[227112]: 2026-01-06 15:32:54.28045183 +0000 UTC m=+0.183741077 container exec_died 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Jan 06 15:32:54 compute-0 sudo[227109]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:54 compute-0 systemd[1]: libpod-conmon-7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487.scope: Deactivated successfully.
Jan 06 15:32:55 compute-0 sudo[227295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljstyqbydepwkxgjenarrslwhswghoda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713574.5905018-716-216163689168566/AnsiballZ_file.py'
Jan 06 15:32:55 compute-0 sudo[227295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:55 compute-0 python3.9[227297]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:32:55 compute-0 sudo[227295]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:56 compute-0 sudo[227447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oodplgvqmtozpzdzcczwlhrogcxaoikl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713575.7327542-725-101570143562127/AnsiballZ_podman_container_info.py'
Jan 06 15:32:56 compute-0 sudo[227447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:56 compute-0 python3.9[227449]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 06 15:32:56 compute-0 sudo[227447]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:57 compute-0 sudo[227612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyesmhtgdpdjfvarwfhjstzsxycvkcdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713576.9585922-733-81282554948191/AnsiballZ_podman_container_exec.py'
Jan 06 15:32:57 compute-0 sudo[227612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:57 compute-0 python3.9[227614]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:32:57 compute-0 systemd[1]: Started libpod-conmon-3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32.scope.
Jan 06 15:32:57 compute-0 podman[227615]: 2026-01-06 15:32:57.838499744 +0000 UTC m=+0.171103392 container exec 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4)
Jan 06 15:32:57 compute-0 podman[227615]: 2026-01-06 15:32:57.872609967 +0000 UTC m=+0.205213615 container exec_died 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 06 15:32:57 compute-0 systemd[1]: libpod-conmon-3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32.scope: Deactivated successfully.
Jan 06 15:32:57 compute-0 sudo[227612]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:58 compute-0 sudo[227795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjhkswppeqxqfoypksavajbleznavwqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713578.1553342-741-78292200608859/AnsiballZ_podman_container_exec.py'
Jan 06 15:32:58 compute-0 sudo[227795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:32:58 compute-0 python3.9[227797]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:32:59 compute-0 systemd[1]: Started libpod-conmon-3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32.scope.
Jan 06 15:32:59 compute-0 podman[227798]: 2026-01-06 15:32:59.10485456 +0000 UTC m=+0.145175736 container exec 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:32:59 compute-0 podman[227798]: 2026-01-06 15:32:59.140887904 +0000 UTC m=+0.181209020 container exec_died 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Jan 06 15:32:59 compute-0 systemd[1]: libpod-conmon-3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32.scope: Deactivated successfully.
Jan 06 15:32:59 compute-0 sudo[227795]: pam_unix(sudo:session): session closed for user root
Jan 06 15:32:59 compute-0 podman[201918]: time="2026-01-06T15:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:32:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27277 "" "Go-http-client/1.1"
Jan 06 15:32:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3864 "" "Go-http-client/1.1"
Jan 06 15:32:59 compute-0 sudo[227978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezauskcnyffoknnjpstwzweonbbtjmjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713579.5230825-749-77300618720246/AnsiballZ_file.py'
Jan 06 15:32:59 compute-0 sudo[227978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:00 compute-0 python3.9[227980]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:00 compute-0 sudo[227978]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:01 compute-0 sudo[228130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whwpbiexrqgoojvdgurtglkydmifkgsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713580.6766982-758-130741654105002/AnsiballZ_podman_container_info.py'
Jan 06 15:33:01 compute-0 sudo[228130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:01 compute-0 python3.9[228132]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 06 15:33:01 compute-0 openstack_network_exporter[205258]: ERROR   15:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:33:01 compute-0 openstack_network_exporter[205258]: ERROR   15:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:33:01 compute-0 sudo[228130]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:02 compute-0 sudo[228310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwogzmxnturccomnlkftorxepgqzcxxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713581.900632-766-273162223568527/AnsiballZ_podman_container_exec.py'
Jan 06 15:33:02 compute-0 sudo[228310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:02 compute-0 podman[228268]: 2026-01-06 15:33:02.46978862 +0000 UTC m=+0.139175127 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:33:02 compute-0 python3.9[228314]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:33:02 compute-0 systemd[1]: Started libpod-conmon-97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e.scope.
Jan 06 15:33:02 compute-0 podman[228315]: 2026-01-06 15:33:02.821663178 +0000 UTC m=+0.133807504 container exec 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 15:33:02 compute-0 podman[228315]: 2026-01-06 15:33:02.856156741 +0000 UTC m=+0.168300998 container exec_died 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 15:33:02 compute-0 systemd[1]: libpod-conmon-97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e.scope: Deactivated successfully.
Jan 06 15:33:02 compute-0 sudo[228310]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:03 compute-0 sudo[228494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eybcfufmehosvwdgoyxxjjmbjtcztymh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713583.173619-774-8276857803812/AnsiballZ_podman_container_exec.py'
Jan 06 15:33:03 compute-0 sudo[228494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:03 compute-0 podman[228497]: 2026-01-06 15:33:03.879858436 +0000 UTC m=+0.141296452 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2)
Jan 06 15:33:03 compute-0 python3.9[228496]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:33:04 compute-0 systemd[1]: Started libpod-conmon-97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e.scope.
Jan 06 15:33:04 compute-0 podman[228517]: 2026-01-06 15:33:04.086176654 +0000 UTC m=+0.159422381 container exec 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:33:04 compute-0 podman[228517]: 2026-01-06 15:33:04.119489813 +0000 UTC m=+0.192735550 container exec_died 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:33:04 compute-0 systemd[1]: libpod-conmon-97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e.scope: Deactivated successfully.
Jan 06 15:33:04 compute-0 sudo[228494]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:05 compute-0 sudo[228696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lekamxhpvdqiirihwpmmdejofcnawrxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713584.508778-782-259650060766763/AnsiballZ_file.py'
Jan 06 15:33:05 compute-0 sudo[228696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:05 compute-0 python3.9[228698]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:05 compute-0 sudo[228696]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:06 compute-0 sudo[228865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrzirepkrtembhrznbxhrfsywltlhmsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713585.7570164-791-281262525307714/AnsiballZ_podman_container_info.py'
Jan 06 15:33:06 compute-0 sudo[228865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:06 compute-0 podman[228822]: 2026-01-06 15:33:06.377689773 +0000 UTC m=+0.148615596 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Jan 06 15:33:06 compute-0 python3.9[228871]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 06 15:33:06 compute-0 sudo[228865]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:07 compute-0 sudo[229033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajvwojounjljwknyfmesmgcyzwbfjssp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713586.9361415-799-222647064699380/AnsiballZ_podman_container_exec.py'
Jan 06 15:33:07 compute-0 sudo[229033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:07 compute-0 python3.9[229035]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:33:07 compute-0 systemd[1]: Started libpod-conmon-935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f.scope.
Jan 06 15:33:07 compute-0 podman[229036]: 2026-01-06 15:33:07.78791745 +0000 UTC m=+0.170705509 container exec 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:33:07 compute-0 podman[229036]: 2026-01-06 15:33:07.820092129 +0000 UTC m=+0.202880168 container exec_died 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:33:07 compute-0 sudo[229033]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:07 compute-0 systemd[1]: libpod-conmon-935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f.scope: Deactivated successfully.
Jan 06 15:33:08 compute-0 sudo[229215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvbqirqtdlhoobobffdtnweuzodwzldu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713588.210607-807-190870396972633/AnsiballZ_podman_container_exec.py'
Jan 06 15:33:08 compute-0 sudo[229215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:08 compute-0 python3.9[229217]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:33:09 compute-0 systemd[1]: Started libpod-conmon-935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f.scope.
Jan 06 15:33:09 compute-0 podman[229218]: 2026-01-06 15:33:09.151980408 +0000 UTC m=+0.122780663 container exec 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:33:09 compute-0 podman[229218]: 2026-01-06 15:33:09.186655314 +0000 UTC m=+0.157455519 container exec_died 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:33:09 compute-0 sudo[229215]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:09 compute-0 systemd[1]: libpod-conmon-935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f.scope: Deactivated successfully.
Jan 06 15:33:10 compute-0 sudo[229399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aavgqpbwairydieconfsxvhjasxwxcnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713589.5682957-815-227073836973864/AnsiballZ_file.py'
Jan 06 15:33:10 compute-0 sudo[229399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:10 compute-0 python3.9[229401]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:10 compute-0 sudo[229399]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:11 compute-0 sudo[229551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdeoarzczhjgmwgwdbcesrapqrdotnud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713590.5872061-824-206298343075605/AnsiballZ_podman_container_info.py'
Jan 06 15:33:11 compute-0 sudo[229551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:11 compute-0 python3.9[229553]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 06 15:33:11 compute-0 sudo[229551]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:12 compute-0 sudo[229714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhkdosxongvznnortmxjeokerhbtudfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713591.6962545-832-237316001962529/AnsiballZ_podman_container_exec.py'
Jan 06 15:33:12 compute-0 sudo[229714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:12 compute-0 python3.9[229716]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:33:12 compute-0 systemd[1]: Started libpod-conmon-6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4.scope.
Jan 06 15:33:12 compute-0 podman[229717]: 2026-01-06 15:33:12.594998902 +0000 UTC m=+0.151519802 container exec 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7)
Jan 06 15:33:12 compute-0 podman[229717]: 2026-01-06 15:33:12.608355495 +0000 UTC m=+0.164876325 container exec_died 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, distribution-scope=public)
Jan 06 15:33:12 compute-0 sudo[229714]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:12 compute-0 systemd[1]: libpod-conmon-6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4.scope: Deactivated successfully.
Jan 06 15:33:12 compute-0 podman[229731]: 2026-01-06 15:33:12.733301824 +0000 UTC m=+0.128224396 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 15:33:12 compute-0 systemd[1]: 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04-4f7a0f54f483cb9c.service: Main process exited, code=exited, status=1/FAILURE
Jan 06 15:33:12 compute-0 systemd[1]: 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04-4f7a0f54f483cb9c.service: Failed with result 'exit-code'.
Jan 06 15:33:13 compute-0 sudo[229925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rojmfjfbdkljbhwdtipgjwkjbzbczltd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713592.9908538-840-279541802612664/AnsiballZ_podman_container_exec.py'
Jan 06 15:33:13 compute-0 sudo[229925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:13 compute-0 podman[229887]: 2026-01-06 15:33:13.541030093 +0000 UTC m=+0.142084003 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:33:13 compute-0 python3.9[229938]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:33:13 compute-0 systemd[1]: Started libpod-conmon-6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4.scope.
Jan 06 15:33:13 compute-0 podman[229941]: 2026-01-06 15:33:13.892497083 +0000 UTC m=+0.139105584 container exec 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, version=9.6, config_id=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 06 15:33:13 compute-0 podman[229941]: 2026-01-06 15:33:13.928038481 +0000 UTC m=+0.174646932 container exec_died 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:33:13 compute-0 systemd[1]: libpod-conmon-6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4.scope: Deactivated successfully.
Jan 06 15:33:13 compute-0 sudo[229925]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:14 compute-0 sudo[230119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqeugskwhrqyhdmvkfiebaojzfamkxyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713594.3405774-848-161658933549855/AnsiballZ_file.py'
Jan 06 15:33:14 compute-0 sudo[230119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:15 compute-0 python3.9[230121]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:15 compute-0 sudo[230119]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:15 compute-0 sudo[230289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcvnnncoldwlhjywjdlcphfgvoptssyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713595.3932743-857-46295823681580/AnsiballZ_podman_container_info.py'
Jan 06 15:33:15 compute-0 sudo[230289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:15 compute-0 podman[230244]: 2026-01-06 15:33:15.900860085 +0000 UTC m=+0.158163368 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, container_name=kepler, name=ubi9, release-0.7.12=, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 06 15:33:16 compute-0 python3.9[230292]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Jan 06 15:33:16 compute-0 sudo[230289]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:17 compute-0 sudo[230454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsfayucpiwvgnukoktjutztsnxshkkmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713596.5403826-865-144390413598935/AnsiballZ_podman_container_exec.py'
Jan 06 15:33:17 compute-0 sudo[230454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:17 compute-0 python3.9[230456]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:33:17 compute-0 systemd[1]: Started libpod-conmon-2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04.scope.
Jan 06 15:33:17 compute-0 podman[230457]: 2026-01-06 15:33:17.796692864 +0000 UTC m=+0.156032941 container exec 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 06 15:33:17 compute-0 podman[230457]: 2026-01-06 15:33:17.830186068 +0000 UTC m=+0.189526115 container exec_died 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 06 15:33:17 compute-0 systemd[1]: libpod-conmon-2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04.scope: Deactivated successfully.
Jan 06 15:33:17 compute-0 sudo[230454]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:18 compute-0 sudo[230639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bivfezmrwqqjgwnwszsmjcinkzadfqqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713598.1560633-873-245297044637658/AnsiballZ_podman_container_exec.py'
Jan 06 15:33:18 compute-0 sudo[230639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:18 compute-0 python3.9[230641]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:33:19 compute-0 systemd[1]: Started libpod-conmon-2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04.scope.
Jan 06 15:33:19 compute-0 podman[230642]: 2026-01-06 15:33:19.12278626 +0000 UTC m=+0.150169016 container exec 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:33:19 compute-0 podman[230642]: 2026-01-06 15:33:19.158050521 +0000 UTC m=+0.185433217 container exec_died 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:33:19 compute-0 systemd[1]: libpod-conmon-2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04.scope: Deactivated successfully.
Jan 06 15:33:19 compute-0 sudo[230639]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:20 compute-0 sudo[230820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvxpbywzuapaqjlupfzaxmebhajyzvgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713599.526432-881-226223486546651/AnsiballZ_file.py'
Jan 06 15:33:20 compute-0 sudo[230820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:20 compute-0 python3.9[230822]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:20 compute-0 sudo[230820]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:21 compute-0 sudo[230972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piwyxnmrcwswaivzplocjhoalxlymzpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713600.6136582-890-47042150174776/AnsiballZ_podman_container_info.py'
Jan 06 15:33:21 compute-0 sudo[230972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:21 compute-0 python3.9[230974]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Jan 06 15:33:21 compute-0 sudo[230972]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:22 compute-0 sudo[231167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poshhupubxfpacdaebeehsawcnfaatex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713601.7399843-898-178381596370718/AnsiballZ_podman_container_exec.py'
Jan 06 15:33:22 compute-0 sudo[231167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:22 compute-0 podman[231111]: 2026-01-06 15:33:22.289567741 +0000 UTC m=+0.122473845 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:33:22 compute-0 podman[231110]: 2026-01-06 15:33:22.337667311 +0000 UTC m=+0.169717703 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 06 15:33:22 compute-0 python3.9[231179]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:33:22 compute-0 systemd[1]: Started libpod-conmon-f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca.scope.
Jan 06 15:33:22 compute-0 podman[231190]: 2026-01-06 15:33:22.610536856 +0000 UTC m=+0.135970211 container exec f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, container_name=kepler, release-0.7.12=, name=ubi9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 06 15:33:22 compute-0 podman[231190]: 2026-01-06 15:33:22.643581028 +0000 UTC m=+0.169014383 container exec_died f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543)
Jan 06 15:33:22 compute-0 systemd[1]: libpod-conmon-f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca.scope: Deactivated successfully.
Jan 06 15:33:22 compute-0 sudo[231167]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:23 compute-0 sudo[231372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeipvccutxczsmfmptyhmawynaaxstvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713603.0026884-906-214051729384094/AnsiballZ_podman_container_exec.py'
Jan 06 15:33:23 compute-0 sudo[231372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:23 compute-0 python3.9[231374]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 06 15:33:23 compute-0 systemd[1]: Started libpod-conmon-f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca.scope.
Jan 06 15:33:23 compute-0 podman[231375]: 2026-01-06 15:33:23.8888061 +0000 UTC m=+0.135252203 container exec f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, name=ubi9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:33:23 compute-0 podman[231375]: 2026-01-06 15:33:23.921574075 +0000 UTC m=+0.168020158 container exec_died f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, container_name=kepler)
Jan 06 15:33:23 compute-0 sudo[231372]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:23 compute-0 systemd[1]: libpod-conmon-f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca.scope: Deactivated successfully.
Jan 06 15:33:24 compute-0 sudo[231553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zycgpvnghnlxnvcfktraevcuiuhsutei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713604.2269256-914-95025608122901/AnsiballZ_file.py'
Jan 06 15:33:24 compute-0 sudo[231553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:24 compute-0 python3.9[231555]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:24 compute-0 sudo[231553]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:25 compute-0 sudo[231705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjpkjsomzqovarvogqrvnwxylodzinkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713605.3292234-923-248657512600388/AnsiballZ_file.py'
Jan 06 15:33:25 compute-0 sudo[231705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:26 compute-0 python3.9[231707]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:26 compute-0 sudo[231705]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:26 compute-0 sudo[231857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvyhrihyvfczkzpznhhhzcdbexdykzqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713606.363484-931-178180735998261/AnsiballZ_stat.py'
Jan 06 15:33:26 compute-0 sudo[231857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:27 compute-0 nova_compute[185513]: 2026-01-06 15:33:27.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:27 compute-0 nova_compute[185513]: 2026-01-06 15:33:27.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 06 15:33:27 compute-0 nova_compute[185513]: 2026-01-06 15:33:27.049 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 06 15:33:27 compute-0 nova_compute[185513]: 2026-01-06 15:33:27.049 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:27 compute-0 nova_compute[185513]: 2026-01-06 15:33:27.050 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 06 15:33:27 compute-0 nova_compute[185513]: 2026-01-06 15:33:27.072 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:27 compute-0 python3.9[231859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:33:27 compute-0 sudo[231857]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:27 compute-0 sudo[231980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxgyxxhxaozqbmvlsixtcvxmjhpxgpfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713606.363484-931-178180735998261/AnsiballZ_copy.py'
Jan 06 15:33:27 compute-0 sudo[231980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:27 compute-0 python3.9[231982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767713606.363484-931-178180735998261/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:27 compute-0 sudo[231980]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:28 compute-0 sudo[232132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpxnonfubijkbvnhjmnhqvhxegtbmehl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713608.3478236-947-74903501479072/AnsiballZ_file.py'
Jan 06 15:33:28 compute-0 sudo[232132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:29 compute-0 python3.9[232134]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:29 compute-0 sudo[232132]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:29 compute-0 nova_compute[185513]: 2026-01-06 15:33:29.089 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:29 compute-0 nova_compute[185513]: 2026-01-06 15:33:29.090 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:29 compute-0 podman[201918]: time="2026-01-06T15:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:33:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27276 "" "Go-http-client/1.1"
Jan 06 15:33:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3868 "" "Go-http-client/1.1"
Jan 06 15:33:29 compute-0 sudo[232284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzemgavvgzxpdbfptqsainiaaulgqtta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713609.3472688-955-179192285103521/AnsiballZ_stat.py'
Jan 06 15:33:29 compute-0 sudo[232284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:30 compute-0 nova_compute[185513]: 2026-01-06 15:33:30.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:30 compute-0 nova_compute[185513]: 2026-01-06 15:33:30.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:33:30 compute-0 nova_compute[185513]: 2026-01-06 15:33:30.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:33:30 compute-0 nova_compute[185513]: 2026-01-06 15:33:30.045 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:33:30 compute-0 nova_compute[185513]: 2026-01-06 15:33:30.046 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:30 compute-0 python3.9[232286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:33:30 compute-0 sudo[232284]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:30 compute-0 sudo[232362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dspqovvdwfzdzcpwmtlamzolvaryqdsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713609.3472688-955-179192285103521/AnsiballZ_file.py'
Jan 06 15:33:30 compute-0 sudo[232362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:30 compute-0 python3.9[232364]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:30 compute-0 sudo[232362]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.072 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.073 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.074 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.075 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:33:31 compute-0 openstack_network_exporter[205258]: ERROR   15:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:33:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:33:31 compute-0 openstack_network_exporter[205258]: ERROR   15:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:33:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:33:31 compute-0 sudo[232514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvysqxfrgtzpgkswkojukqfrllupakzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713611.002881-967-209265232195697/AnsiballZ_stat.py'
Jan 06 15:33:31 compute-0 sudo[232514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.562 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.564 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5684MB free_disk=72.48097229003906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.564 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.564 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.741 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.741 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:33:31 compute-0 python3.9[232516]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:33:31 compute-0 sudo[232514]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.854 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing inventories for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.984 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating ProviderTree inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 06 15:33:31 compute-0 nova_compute[185513]: 2026-01-06 15:33:31.985 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 15:33:32 compute-0 nova_compute[185513]: 2026-01-06 15:33:32.014 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing aggregate associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 06 15:33:32 compute-0 nova_compute[185513]: 2026-01-06 15:33:32.054 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing trait associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 06 15:33:32 compute-0 nova_compute[185513]: 2026-01-06 15:33:32.094 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:33:32 compute-0 nova_compute[185513]: 2026-01-06 15:33:32.115 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:33:32 compute-0 nova_compute[185513]: 2026-01-06 15:33:32.118 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:33:32 compute-0 nova_compute[185513]: 2026-01-06 15:33:32.118 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:33:32 compute-0 sudo[232592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eczlhzwiffjsbpgzadiadxtamseyzjjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713611.002881-967-209265232195697/AnsiballZ_file.py'
Jan 06 15:33:32 compute-0 sudo[232592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:32 compute-0 python3.9[232594]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.l2rqzyb9 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:32 compute-0 sudo[232592]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:32 compute-0 podman[232648]: 2026-01-06 15:33:32.877777836 +0000 UTC m=+0.134948904 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.066 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.068 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'power.state': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'power.state': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'power.state': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'power.state': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'power.state': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:33:33.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:33:33 compute-0 nova_compute[185513]: 2026-01-06 15:33:33.118 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:33:33 compute-0 sudo[232765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-solnfquqowxenjxitoppcxcugmqhxzgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713612.665847-979-100809185444932/AnsiballZ_stat.py'
Jan 06 15:33:33 compute-0 sudo[232765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:33 compute-0 python3.9[232767]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:33:33 compute-0 sudo[232765]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:33 compute-0 sudo[232843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtjabpbnpjvsgqskoekryztkqnarwuvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713612.665847-979-100809185444932/AnsiballZ_file.py'
Jan 06 15:33:33 compute-0 sudo[232843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:33 compute-0 python3.9[232845]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:34 compute-0 sudo[232843]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:34 compute-0 sudo[233011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozpdzadvterjlavvfjvujqodrvyqosra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713614.2635293-992-23855536153058/AnsiballZ_command.py'
Jan 06 15:33:34 compute-0 sudo[233011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:34 compute-0 podman[232969]: 2026-01-06 15:33:34.836558169 +0000 UTC m=+0.116532628 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251224)
Jan 06 15:33:35 compute-0 python3.9[233016]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:33:35 compute-0 sudo[233011]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:35 compute-0 sudo[233167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gevxgwzbndwumcwrxesqzrzqbgtikofk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713615.3138704-1000-139644955015594/AnsiballZ_edpm_nftables_from_files.py'
Jan 06 15:33:36 compute-0 sudo[233167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:36 compute-0 python3[233169]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 06 15:33:36 compute-0 sudo[233167]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:36 compute-0 podman[233246]: 2026-01-06 15:33:36.864914658 +0000 UTC m=+0.117271418 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9)
Jan 06 15:33:37 compute-0 sudo[233337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajgkjvyfyaggkdnsuewmmmqyqqbhewts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713616.5821187-1008-38148444874771/AnsiballZ_stat.py'
Jan 06 15:33:37 compute-0 sudo[233337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:37 compute-0 python3.9[233339]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:33:37 compute-0 sudo[233337]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:37 compute-0 sudo[233415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayukdelksuwzgjxpwhmcbtidyjtscfgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713616.5821187-1008-38148444874771/AnsiballZ_file.py'
Jan 06 15:33:37 compute-0 sudo[233415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:37 compute-0 python3.9[233417]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:37 compute-0 sudo[233415]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:38 compute-0 sudo[233567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmcuudgnankxdzfzshngeurjalcvjxpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713618.2491317-1020-273183689172465/AnsiballZ_stat.py'
Jan 06 15:33:38 compute-0 sudo[233567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:38 compute-0 python3.9[233569]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:33:39 compute-0 sudo[233567]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:39 compute-0 sudo[233645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryddwzwkhdadtnqvldcoulmsojbcqaer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713618.2491317-1020-273183689172465/AnsiballZ_file.py'
Jan 06 15:33:39 compute-0 sudo[233645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:39 compute-0 python3.9[233647]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:39 compute-0 sudo[233645]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:40 compute-0 sudo[233797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiquwkszcztmzwqdsrqoixozcbhbownq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713619.8862746-1032-205086106277580/AnsiballZ_stat.py'
Jan 06 15:33:40 compute-0 sudo[233797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:40 compute-0 python3.9[233799]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:33:40 compute-0 sudo[233797]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:41 compute-0 sudo[233875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuvxvdiyywngqsvhpaqgwdtdmqhqhezc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713619.8862746-1032-205086106277580/AnsiballZ_file.py'
Jan 06 15:33:41 compute-0 sudo[233875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:41 compute-0 python3.9[233877]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:41 compute-0 sudo[233875]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:42 compute-0 sudo[234027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmqbdglesjqjlpupkdorhyblkdthrwnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713621.5896437-1044-205413130514272/AnsiballZ_stat.py'
Jan 06 15:33:42 compute-0 sudo[234027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:42 compute-0 python3.9[234029]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:33:42 compute-0 sudo[234027]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:42 compute-0 sudo[234105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiocureizzjhlowzfengdmhapliqqzln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713621.5896437-1044-205413130514272/AnsiballZ_file.py'
Jan 06 15:33:42 compute-0 sudo[234105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:42 compute-0 podman[234107]: 2026-01-06 15:33:42.996451393 +0000 UTC m=+0.100111854 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:33:43 compute-0 python3.9[234108]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:43 compute-0 sudo[234105]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:43 compute-0 podman[234203]: 2026-01-06 15:33:43.853705489 +0000 UTC m=+0.116760614 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:33:43 compute-0 sudo[234296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnxiloacwqtmszdwlatphgvzmsekbwpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713623.4308112-1056-233946639472073/AnsiballZ_stat.py'
Jan 06 15:33:44 compute-0 sudo[234296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:44 compute-0 python3.9[234298]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:33:44 compute-0 sudo[234296]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:44 compute-0 sudo[234421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyrdxgxsxysxrjynixbodpvhtuzuqvmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713623.4308112-1056-233946639472073/AnsiballZ_copy.py'
Jan 06 15:33:44 compute-0 sudo[234421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:44 compute-0 python3.9[234423]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767713623.4308112-1056-233946639472073/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:44 compute-0 sudo[234421]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:45 compute-0 sudo[234573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkvfeixpiesrnmkrvzmihsqsmijfmmxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713625.1359963-1071-48138588251689/AnsiballZ_file.py'
Jan 06 15:33:45 compute-0 sudo[234573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:45 compute-0 python3.9[234575]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:45 compute-0 sudo[234573]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:46 compute-0 podman[234699]: 2026-01-06 15:33:46.510571575 +0000 UTC m=+0.096275404 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, io.openshift.expose-services=)
Jan 06 15:33:46 compute-0 sudo[234745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhhrczefzixemdlrafymvyoviflgipqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713626.0377986-1079-19596554985367/AnsiballZ_command.py'
Jan 06 15:33:46 compute-0 sudo[234745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:46 compute-0 python3.9[234748]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:33:46 compute-0 sudo[234745]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:47 compute-0 sudo[234901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxezeaojvbolawlawefgxorbhpsfvhqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713627.0078495-1087-219603115052676/AnsiballZ_blockinfile.py'
Jan 06 15:33:47 compute-0 sudo[234901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:47 compute-0 python3.9[234903]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:47 compute-0 sudo[234901]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:48 compute-0 sudo[235053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pddlvcmlvbimrvhuwalrvheupybprhab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713628.114795-1096-119290870822416/AnsiballZ_command.py'
Jan 06 15:33:48 compute-0 sudo[235053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:48 compute-0 python3.9[235055]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:33:48 compute-0 sudo[235053]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:49 compute-0 sudo[235207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fczzkjiynmteigyvqhocieoqwtebdxmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713629.0818675-1104-236204227120049/AnsiballZ_stat.py'
Jan 06 15:33:49 compute-0 sudo[235207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:49 compute-0 python3.9[235209]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 06 15:33:49 compute-0 sudo[235207]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:50 compute-0 sudo[235361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwgymiqgnwrgxawntvfuevwfaluonatj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713630.0233066-1112-246377729465053/AnsiballZ_command.py'
Jan 06 15:33:50 compute-0 sudo[235361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:50 compute-0 python3.9[235363]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:33:50 compute-0 sudo[235361]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:51 compute-0 sudo[235516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfyizfupqeiakloeeugpvugqveriibpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713631.0335565-1120-136886793141397/AnsiballZ_file.py'
Jan 06 15:33:51 compute-0 sudo[235516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:33:51 compute-0 python3.9[235518]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:33:51 compute-0 sudo[235516]: pam_unix(sudo:session): session closed for user root
Jan 06 15:33:52 compute-0 sshd-session[214063]: Connection closed by 192.168.122.30 port 47690
Jan 06 15:33:52 compute-0 sshd-session[214060]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:33:52 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 06 15:33:52 compute-0 systemd[1]: session-27.scope: Consumed 1min 51.480s CPU time.
Jan 06 15:33:52 compute-0 systemd-logind[791]: Session 27 logged out. Waiting for processes to exit.
Jan 06 15:33:52 compute-0 systemd-logind[791]: Removed session 27.
Jan 06 15:33:52 compute-0 podman[235544]: 2026-01-06 15:33:52.846365915 +0000 UTC m=+0.105013674 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:33:52 compute-0 podman[235543]: 2026-01-06 15:33:52.879985782 +0000 UTC m=+0.143119810 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 06 15:33:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:33:53.672 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:33:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:33:53.675 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:33:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:33:53.675 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:33:57 compute-0 sshd-session[235589]: Accepted publickey for zuul from 192.168.122.30 port 37338 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 15:33:57 compute-0 systemd-logind[791]: New session 28 of user zuul.
Jan 06 15:33:57 compute-0 systemd[1]: Started Session 28 of User zuul.
Jan 06 15:33:57 compute-0 sshd-session[235589]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:33:58 compute-0 python3.9[235742]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:33:59 compute-0 podman[201918]: time="2026-01-06T15:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:33:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:33:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3874 "" "Go-http-client/1.1"
Jan 06 15:33:59 compute-0 sudo[235896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luvovozociresoshqdtjemwpxyeqfmfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713639.0570688-29-120822408876970/AnsiballZ_systemd.py'
Jan 06 15:33:59 compute-0 sudo[235896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:00 compute-0 python3.9[235898]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Jan 06 15:34:00 compute-0 sudo[235896]: pam_unix(sudo:session): session closed for user root
Jan 06 15:34:00 compute-0 sudo[236049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwlrggiknxficvflsxtrykxpeweqtshd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713640.5229118-37-216948261502006/AnsiballZ_setup.py'
Jan 06 15:34:00 compute-0 sudo[236049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:01 compute-0 python3.9[236051]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 06 15:34:01 compute-0 openstack_network_exporter[205258]: ERROR   15:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:34:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:34:01 compute-0 openstack_network_exporter[205258]: ERROR   15:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:34:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:34:01 compute-0 sudo[236049]: pam_unix(sudo:session): session closed for user root
Jan 06 15:34:02 compute-0 sudo[236133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vavdhvarbsdilzaxhflmvxhkdcbziyvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713640.5229118-37-216948261502006/AnsiballZ_dnf.py'
Jan 06 15:34:02 compute-0 sudo[236133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:02 compute-0 python3.9[236135]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 06 15:34:03 compute-0 podman[236138]: 2026-01-06 15:34:03.866812802 +0000 UTC m=+0.116987599 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 06 15:34:04 compute-0 sudo[236133]: pam_unix(sudo:session): session closed for user root
Jan 06 15:34:05 compute-0 podman[236284]: 2026-01-06 15:34:05.798052158 +0000 UTC m=+0.104971193 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224)
Jan 06 15:34:05 compute-0 sudo[236327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbuvbuvfpyosvazrlfsryipuruwycign ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713645.0848563-49-269790166576294/AnsiballZ_stat.py'
Jan 06 15:34:05 compute-0 sudo[236327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:05 compute-0 python3.9[236330]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:34:06 compute-0 sudo[236327]: pam_unix(sudo:session): session closed for user root
Jan 06 15:34:06 compute-0 sudo[236451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbnmiemwxlfzocutfemiaychnnpgtmhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713645.0848563-49-269790166576294/AnsiballZ_copy.py'
Jan 06 15:34:06 compute-0 sudo[236451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:06 compute-0 python3.9[236453]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767713645.0848563-49-269790166576294/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:34:06 compute-0 sudo[236451]: pam_unix(sudo:session): session closed for user root
Jan 06 15:34:07 compute-0 podman[236569]: 2026-01-06 15:34:07.834037128 +0000 UTC m=+0.100217727 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, vcs-type=git, config_id=openstack_network_exporter)
Jan 06 15:34:07 compute-0 sudo[236624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uifjfuinatqrkhdgknrlvamhyxqrwpax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713647.18781-64-78543676332402/AnsiballZ_file.py'
Jan 06 15:34:07 compute-0 sudo[236624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:08 compute-0 python3.9[236626]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:34:08 compute-0 sudo[236624]: pam_unix(sudo:session): session closed for user root
Jan 06 15:34:08 compute-0 sudo[236777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufstfvtlfqtaviqmtskibwniqidbupsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713648.2737856-72-254145412128399/AnsiballZ_stat.py'
Jan 06 15:34:08 compute-0 sudo[236777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:08 compute-0 python3.9[236779]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 06 15:34:08 compute-0 sudo[236777]: pam_unix(sudo:session): session closed for user root
Jan 06 15:34:09 compute-0 sudo[236900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zigytudkhltzwolrvijojfjatbzhswcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713648.2737856-72-254145412128399/AnsiballZ_copy.py'
Jan 06 15:34:09 compute-0 sudo[236900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:09 compute-0 python3.9[236902]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767713648.2737856-72-254145412128399/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 06 15:34:09 compute-0 sudo[236900]: pam_unix(sudo:session): session closed for user root
Jan 06 15:34:10 compute-0 sudo[237052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tldpjuzipratulpbevdboqnybkakcsnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767713649.9795547-87-274186937154995/AnsiballZ_systemd.py'
Jan 06 15:34:10 compute-0 sudo[237052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:10 compute-0 python3.9[237054]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 06 15:34:10 compute-0 systemd[1]: Stopping System Logging Service...
Jan 06 15:34:11 compute-0 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] exiting on signal 15.
Jan 06 15:34:11 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Jan 06 15:34:11 compute-0 systemd[1]: Stopped System Logging Service.
Jan 06 15:34:11 compute-0 systemd[1]: rsyslog.service: Consumed 4.320s CPU time, 7.9M memory peak, read 0B from disk, written 6.7M to disk.
Jan 06 15:34:11 compute-0 systemd[1]: Starting System Logging Service...
Jan 06 15:34:11 compute-0 rsyslogd[237058]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="237058" x-info="https://www.rsyslog.com"] start
Jan 06 15:34:11 compute-0 systemd[1]: Started System Logging Service.
Jan 06 15:34:11 compute-0 rsyslogd[237058]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 06 15:34:11 compute-0 rsyslogd[237058]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Jan 06 15:34:11 compute-0 rsyslogd[237058]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Jan 06 15:34:11 compute-0 rsyslogd[237058]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Jan 06 15:34:11 compute-0 sudo[237052]: pam_unix(sudo:session): session closed for user root
Jan 06 15:34:11 compute-0 rsyslogd[237058]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Jan 06 15:34:11 compute-0 sshd-session[235592]: Connection closed by 192.168.122.30 port 37338
Jan 06 15:34:11 compute-0 sshd-session[235589]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:34:11 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 06 15:34:11 compute-0 systemd[1]: session-28.scope: Consumed 11.447s CPU time.
Jan 06 15:34:11 compute-0 systemd-logind[791]: Session 28 logged out. Waiting for processes to exit.
Jan 06 15:34:11 compute-0 systemd-logind[791]: Removed session 28.
Jan 06 15:34:13 compute-0 podman[237087]: 2026-01-06 15:34:13.811710065 +0000 UTC m=+0.082687431 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:34:14 compute-0 podman[237106]: 2026-01-06 15:34:14.801021605 +0000 UTC m=+0.093934454 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:34:16 compute-0 podman[237131]: 2026-01-06 15:34:16.842921451 +0000 UTC m=+0.108192885 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, architecture=x86_64, config_id=kepler, io.openshift.expose-services=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:34:23 compute-0 podman[237152]: 2026-01-06 15:34:23.874405708 +0000 UTC m=+0.120369222 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:34:23 compute-0 podman[237151]: 2026-01-06 15:34:23.939230334 +0000 UTC m=+0.193652778 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:34:29 compute-0 nova_compute[185513]: 2026-01-06 15:34:29.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:34:29 compute-0 podman[201918]: time="2026-01-06T15:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:34:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:34:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3869 "" "Go-http-client/1.1"
Jan 06 15:34:30 compute-0 nova_compute[185513]: 2026-01-06 15:34:30.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:34:31 compute-0 nova_compute[185513]: 2026-01-06 15:34:31.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:34:31 compute-0 nova_compute[185513]: 2026-01-06 15:34:31.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:34:31 compute-0 nova_compute[185513]: 2026-01-06 15:34:31.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:34:31 compute-0 nova_compute[185513]: 2026-01-06 15:34:31.059 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:34:31 compute-0 nova_compute[185513]: 2026-01-06 15:34:31.061 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:34:31 compute-0 nova_compute[185513]: 2026-01-06 15:34:31.062 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:34:31 compute-0 openstack_network_exporter[205258]: ERROR   15:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:34:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:34:31 compute-0 openstack_network_exporter[205258]: ERROR   15:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:34:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.250 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.252 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.252 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.253 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.811 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.812 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5725MB free_disk=72.477294921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.813 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.813 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.962 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:34:32 compute-0 nova_compute[185513]: 2026-01-06 15:34:32.963 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:34:33 compute-0 nova_compute[185513]: 2026-01-06 15:34:33.006 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:34:33 compute-0 nova_compute[185513]: 2026-01-06 15:34:33.033 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:34:33 compute-0 nova_compute[185513]: 2026-01-06 15:34:33.036 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:34:33 compute-0 nova_compute[185513]: 2026-01-06 15:34:33.036 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:34:34 compute-0 nova_compute[185513]: 2026-01-06 15:34:34.035 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:34:34 compute-0 nova_compute[185513]: 2026-01-06 15:34:34.036 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:34:34 compute-0 nova_compute[185513]: 2026-01-06 15:34:34.037 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:34:34 compute-0 podman[237200]: 2026-01-06 15:34:34.882261492 +0000 UTC m=+0.138503133 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 06 15:34:36 compute-0 podman[237218]: 2026-01-06 15:34:36.862470154 +0000 UTC m=+0.116785048 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 06 15:34:37 compute-0 nova_compute[185513]: 2026-01-06 15:34:37.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:34:38 compute-0 podman[237237]: 2026-01-06 15:34:38.85547205 +0000 UTC m=+0.114480639 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, config_id=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Jan 06 15:34:44 compute-0 podman[237258]: 2026-01-06 15:34:44.784935114 +0000 UTC m=+0.099955780 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 06 15:34:45 compute-0 podman[237278]: 2026-01-06 15:34:45.859111542 +0000 UTC m=+0.120040063 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:34:47 compute-0 podman[237301]: 2026-01-06 15:34:47.860848863 +0000 UTC m=+0.129638822 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 06 15:34:50 compute-0 sshd-session[237321]: Accepted publickey for zuul from 38.102.83.46 port 39558 ssh2: RSA SHA256:/tsYtTPHPswvCHUyDjuXJcnXXQRlaCz6QYAgaouSN5U
Jan 06 15:34:50 compute-0 systemd-logind[791]: New session 29 of user zuul.
Jan 06 15:34:50 compute-0 systemd[1]: Started Session 29 of User zuul.
Jan 06 15:34:50 compute-0 sshd-session[237321]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 15:34:52 compute-0 python3[237498]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:34:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:34:53.672 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:34:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:34:53.674 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:34:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:34:53.674 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:34:53 compute-0 sudo[237719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqgsdrfestsotbejrtbfngjbyzoyqwdu ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713693.3589177-36829-243478188060457/AnsiballZ_command.py'
Jan 06 15:34:53 compute-0 sudo[237719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:54 compute-0 podman[237721]: 2026-01-06 15:34:54.060569168 +0000 UTC m=+0.093679117 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:34:54 compute-0 python3[237728]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:34:54 compute-0 podman[237722]: 2026-01-06 15:34:54.186205905 +0000 UTC m=+0.205341381 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:34:54 compute-0 sudo[237719]: pam_unix(sudo:session): session closed for user root
Jan 06 15:34:55 compute-0 sudo[237918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfnmflnhbxskelyufqamsawfvmzxljwa ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713694.5530908-36840-151344983586851/AnsiballZ_command.py'
Jan 06 15:34:55 compute-0 sudo[237918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:55 compute-0 python3[237920]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:34:56 compute-0 sudo[237918]: pam_unix(sudo:session): session closed for user root
Jan 06 15:34:58 compute-0 python3[238071]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 06 15:34:59 compute-0 sudo[238222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfimafyeodwmkxhuksiqefjbuhyrfbnz ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713698.5993466-36884-257075515815457/AnsiballZ_setup.py'
Jan 06 15:34:59 compute-0 sudo[238222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:34:59 compute-0 python3[238224]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 06 15:34:59 compute-0 podman[201918]: time="2026-01-06T15:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:34:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:34:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3871 "" "Go-http-client/1.1"
Jan 06 15:35:00 compute-0 sudo[238222]: pam_unix(sudo:session): session closed for user root
Jan 06 15:35:01 compute-0 openstack_network_exporter[205258]: ERROR   15:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:35:01 compute-0 openstack_network_exporter[205258]: ERROR   15:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:35:01 compute-0 sudo[238447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxvauwqgtmyuxnanwpzwerxvmxfhrlne ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713701.1901233-36913-127496605908757/AnsiballZ_command.py'
Jan 06 15:35:01 compute-0 sudo[238447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:35:01 compute-0 python3[238449]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:35:01 compute-0 sudo[238447]: pam_unix(sudo:session): session closed for user root
Jan 06 15:35:02 compute-0 sudo[238611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwovoonswttbttirburczqlwaadmdivl ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767713702.315795-36930-83662124503899/AnsiballZ_command.py'
Jan 06 15:35:02 compute-0 sudo[238611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 15:35:02 compute-0 python3[238613]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 15:35:03 compute-0 sudo[238611]: pam_unix(sudo:session): session closed for user root
Jan 06 15:35:05 compute-0 podman[238652]: 2026-01-06 15:35:05.860698118 +0000 UTC m=+0.110810663 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:35:07 compute-0 podman[238672]: 2026-01-06 15:35:07.84282122 +0000 UTC m=+0.105753321 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:35:09 compute-0 podman[238692]: 2026-01-06 15:35:09.83758287 +0000 UTC m=+0.107893207 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:35:15 compute-0 podman[238713]: 2026-01-06 15:35:15.847797777 +0000 UTC m=+0.087583023 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi)
Jan 06 15:35:16 compute-0 podman[238732]: 2026-01-06 15:35:16.79866072 +0000 UTC m=+0.070224234 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:35:16 compute-0 sshd-session[238756]: Connection closed by 178.128.171.80 port 55142
Jan 06 15:35:18 compute-0 podman[238757]: 2026-01-06 15:35:18.839736275 +0000 UTC m=+0.105321201 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, version=9.4, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, maintainer=Red Hat, Inc.)
Jan 06 15:35:24 compute-0 podman[238777]: 2026-01-06 15:35:24.853767339 +0000 UTC m=+0.119892517 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:35:24 compute-0 podman[238776]: 2026-01-06 15:35:24.886791721 +0000 UTC m=+0.146296578 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 06 15:35:29 compute-0 nova_compute[185513]: 2026-01-06 15:35:29.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:35:29 compute-0 podman[201918]: time="2026-01-06T15:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:35:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:35:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3873 "" "Go-http-client/1.1"
Jan 06 15:35:30 compute-0 nova_compute[185513]: 2026-01-06 15:35:30.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:35:31 compute-0 nova_compute[185513]: 2026-01-06 15:35:31.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:35:31 compute-0 nova_compute[185513]: 2026-01-06 15:35:31.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:35:31 compute-0 nova_compute[185513]: 2026-01-06 15:35:31.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:35:31 compute-0 nova_compute[185513]: 2026-01-06 15:35:31.050 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:35:31 compute-0 nova_compute[185513]: 2026-01-06 15:35:31.051 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:35:31 compute-0 openstack_network_exporter[205258]: ERROR   15:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:35:31 compute-0 openstack_network_exporter[205258]: ERROR   15:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:35:32 compute-0 nova_compute[185513]: 2026-01-06 15:35:32.045 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.067 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.068 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.067 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.068 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.068 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.069 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.089 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:35:33.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.508 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.509 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5721MB free_disk=72.4769172668457GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.510 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.510 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.602 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.602 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.626 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.648 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.649 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:35:33 compute-0 nova_compute[185513]: 2026-01-06 15:35:33.649 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:35:34 compute-0 nova_compute[185513]: 2026-01-06 15:35:34.649 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:35:34 compute-0 nova_compute[185513]: 2026-01-06 15:35:34.649 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:35:34 compute-0 nova_compute[185513]: 2026-01-06 15:35:34.649 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:35:36 compute-0 podman[238827]: 2026-01-06 15:35:36.837330627 +0000 UTC m=+0.097020166 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:35:38 compute-0 podman[238844]: 2026-01-06 15:35:38.807897131 +0000 UTC m=+0.078347484 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute)
Jan 06 15:35:40 compute-0 podman[238863]: 2026-01-06 15:35:40.815775458 +0000 UTC m=+0.085496709 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Jan 06 15:35:46 compute-0 podman[238885]: 2026-01-06 15:35:46.857381135 +0000 UTC m=+0.107839985 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 06 15:35:46 compute-0 podman[238904]: 2026-01-06 15:35:46.989809235 +0000 UTC m=+0.093778523 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 15:35:49 compute-0 podman[238929]: 2026-01-06 15:35:49.800636046 +0000 UTC m=+0.071803415 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, container_name=kepler, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=kepler, managed_by=edpm_ansible, vcs-type=git, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, architecture=x86_64)
Jan 06 15:35:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:35:53.673 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:35:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:35:53.674 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:35:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:35:53.674 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:35:55 compute-0 podman[238950]: 2026-01-06 15:35:55.848727318 +0000 UTC m=+0.098469063 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 15:35:55 compute-0 podman[238949]: 2026-01-06 15:35:55.889876221 +0000 UTC m=+0.147487600 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 06 15:35:59 compute-0 podman[201918]: time="2026-01-06T15:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:35:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:35:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3869 "" "Go-http-client/1.1"
Jan 06 15:36:01 compute-0 openstack_network_exporter[205258]: ERROR   15:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:36:01 compute-0 openstack_network_exporter[205258]: ERROR   15:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:36:02 compute-0 sshd-session[237324]: Received disconnect from 38.102.83.46 port 39558:11: disconnected by user
Jan 06 15:36:02 compute-0 sshd-session[237324]: Disconnected from user zuul 38.102.83.46 port 39558
Jan 06 15:36:02 compute-0 sshd-session[237321]: pam_unix(sshd:session): session closed for user zuul
Jan 06 15:36:02 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 06 15:36:02 compute-0 systemd[1]: session-29.scope: Consumed 10.372s CPU time.
Jan 06 15:36:02 compute-0 systemd-logind[791]: Session 29 logged out. Waiting for processes to exit.
Jan 06 15:36:02 compute-0 systemd-logind[791]: Removed session 29.
Jan 06 15:36:07 compute-0 podman[238998]: 2026-01-06 15:36:07.892542685 +0000 UTC m=+0.143960738 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 06 15:36:09 compute-0 podman[239017]: 2026-01-06 15:36:09.826868343 +0000 UTC m=+0.082554893 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 06 15:36:11 compute-0 podman[239036]: 2026-01-06 15:36:11.872991228 +0000 UTC m=+0.119261510 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 06 15:36:17 compute-0 podman[239058]: 2026-01-06 15:36:17.852764928 +0000 UTC m=+0.116899800 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:36:17 compute-0 podman[239059]: 2026-01-06 15:36:17.88768844 +0000 UTC m=+0.131436765 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:36:20 compute-0 podman[239099]: 2026-01-06 15:36:20.857606888 +0000 UTC m=+0.117788152 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30)
Jan 06 15:36:26 compute-0 podman[239119]: 2026-01-06 15:36:26.814723834 +0000 UTC m=+0.069197948 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:36:26 compute-0 podman[239118]: 2026-01-06 15:36:26.865451974 +0000 UTC m=+0.136128186 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:36:29 compute-0 podman[201918]: time="2026-01-06T15:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:36:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:36:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3882 "" "Go-http-client/1.1"
Jan 06 15:36:30 compute-0 nova_compute[185513]: 2026-01-06 15:36:30.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:36:31 compute-0 nova_compute[185513]: 2026-01-06 15:36:31.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:36:31 compute-0 openstack_network_exporter[205258]: ERROR   15:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:36:31 compute-0 openstack_network_exporter[205258]: ERROR   15:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.040 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.041 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.042 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.042 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.070 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.071 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.071 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.072 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.436 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.447 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5702MB free_disk=72.47693634033203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.448 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.448 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.528 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.529 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.555 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.571 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.573 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:36:33 compute-0 nova_compute[185513]: 2026-01-06 15:36:33.573 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:36:35 compute-0 nova_compute[185513]: 2026-01-06 15:36:35.554 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:36:35 compute-0 nova_compute[185513]: 2026-01-06 15:36:35.555 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:36:36 compute-0 nova_compute[185513]: 2026-01-06 15:36:36.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:36:37 compute-0 nova_compute[185513]: 2026-01-06 15:36:37.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:36:38 compute-0 podman[239167]: 2026-01-06 15:36:38.830341129 +0000 UTC m=+0.092445188 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:36:40 compute-0 podman[239186]: 2026-01-06 15:36:40.836635106 +0000 UTC m=+0.088947048 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 06 15:36:42 compute-0 podman[239206]: 2026-01-06 15:36:42.85021685 +0000 UTC m=+0.107222699 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=openstack_network_exporter, version=9.6)
Jan 06 15:36:48 compute-0 podman[239226]: 2026-01-06 15:36:48.837394541 +0000 UTC m=+0.098367261 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:36:48 compute-0 podman[239227]: 2026-01-06 15:36:48.852189513 +0000 UTC m=+0.094182853 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 15:36:51 compute-0 podman[239267]: 2026-01-06 15:36:51.855578347 +0000 UTC m=+0.114380715 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.expose-services=, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., container_name=kepler, release-0.7.12=, distribution-scope=public, io.openshift.tags=base rhel9)
Jan 06 15:36:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:36:53.674 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:36:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:36:53.675 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:36:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:36:53.676 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:36:57 compute-0 podman[239285]: 2026-01-06 15:36:57.881549749 +0000 UTC m=+0.130703426 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:36:57 compute-0 podman[239284]: 2026-01-06 15:36:57.941663411 +0000 UTC m=+0.201461973 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 06 15:36:59 compute-0 podman[201918]: time="2026-01-06T15:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:36:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:36:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3882 "" "Go-http-client/1.1"
Jan 06 15:37:01 compute-0 openstack_network_exporter[205258]: ERROR   15:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:37:01 compute-0 openstack_network_exporter[205258]: ERROR   15:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:37:09 compute-0 podman[239333]: 2026-01-06 15:37:09.84144543 +0000 UTC m=+0.103762381 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 06 15:37:11 compute-0 podman[239351]: 2026-01-06 15:37:11.872591098 +0000 UTC m=+0.139352410 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 06 15:37:13 compute-0 podman[239369]: 2026-01-06 15:37:13.806681739 +0000 UTC m=+0.071880317 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Jan 06 15:37:19 compute-0 podman[239392]: 2026-01-06 15:37:19.866245002 +0000 UTC m=+0.109848618 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:37:19 compute-0 podman[239391]: 2026-01-06 15:37:19.866440607 +0000 UTC m=+0.120497533 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 06 15:37:22 compute-0 podman[239434]: 2026-01-06 15:37:22.855285126 +0000 UTC m=+0.123478604 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, version=9.4, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:37:28 compute-0 podman[239454]: 2026-01-06 15:37:28.842638314 +0000 UTC m=+0.084339076 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:37:28 compute-0 podman[239453]: 2026-01-06 15:37:28.88974709 +0000 UTC m=+0.149884671 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 06 15:37:29 compute-0 podman[201918]: time="2026-01-06T15:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:37:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:37:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3877 "" "Go-http-client/1.1"
Jan 06 15:37:31 compute-0 nova_compute[185513]: 2026-01-06 15:37:31.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:37:31 compute-0 openstack_network_exporter[205258]: ERROR   15:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:37:31 compute-0 openstack_network_exporter[205258]: ERROR   15:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:37:32 compute-0 nova_compute[185513]: 2026-01-06 15:37:32.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:37:33 compute-0 nova_compute[185513]: 2026-01-06 15:37:33.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:37:33 compute-0 nova_compute[185513]: 2026-01-06 15:37:33.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:37:33 compute-0 nova_compute[185513]: 2026-01-06 15:37:33.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:37:33 compute-0 nova_compute[185513]: 2026-01-06 15:37:33.053 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:37:33 compute-0 nova_compute[185513]: 2026-01-06 15:37:33.053 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:37:33 compute-0 nova_compute[185513]: 2026-01-06 15:37:33.053 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.068 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.068 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.089 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:37:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.072 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.073 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.073 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.073 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.621 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.623 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5705MB free_disk=72.47693634033203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.623 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.624 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.705 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.706 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.745 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.767 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.768 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:37:35 compute-0 nova_compute[185513]: 2026-01-06 15:37:35.769 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:37:37 compute-0 nova_compute[185513]: 2026-01-06 15:37:37.768 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:37:37 compute-0 nova_compute[185513]: 2026-01-06 15:37:37.769 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:37:37 compute-0 nova_compute[185513]: 2026-01-06 15:37:37.770 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:37:40 compute-0 podman[239502]: 2026-01-06 15:37:40.841430068 +0000 UTC m=+0.099757196 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 06 15:37:42 compute-0 podman[239520]: 2026-01-06 15:37:42.82656658 +0000 UTC m=+0.086590424 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 06 15:37:44 compute-0 podman[239540]: 2026-01-06 15:37:44.789763812 +0000 UTC m=+0.117305273 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, name=ubi9-minimal, io.openshift.expose-services=)
Jan 06 15:37:50 compute-0 podman[239565]: 2026-01-06 15:37:50.830709045 +0000 UTC m=+0.096861541 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:37:50 compute-0 podman[239564]: 2026-01-06 15:37:50.845841159 +0000 UTC m=+0.110244530 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi)
Jan 06 15:37:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:37:53.676 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:37:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:37:53.677 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:37:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:37:53.678 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:37:53 compute-0 podman[239609]: 2026-01-06 15:37:53.847279165 +0000 UTC m=+0.112843777 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, architecture=x86_64, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, distribution-scope=public, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, container_name=kepler, maintainer=Red Hat, Inc.)
Jan 06 15:37:59 compute-0 podman[201918]: time="2026-01-06T15:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:37:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:37:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3880 "" "Go-http-client/1.1"
Jan 06 15:37:59 compute-0 podman[239629]: 2026-01-06 15:37:59.842617082 +0000 UTC m=+0.097653292 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 15:37:59 compute-0 podman[239628]: 2026-01-06 15:37:59.863228958 +0000 UTC m=+0.134125851 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:38:01 compute-0 openstack_network_exporter[205258]: ERROR   15:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:38:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:38:01 compute-0 openstack_network_exporter[205258]: ERROR   15:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:38:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:38:11 compute-0 podman[239675]: 2026-01-06 15:38:11.858509327 +0000 UTC m=+0.115116206 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 06 15:38:13 compute-0 podman[239693]: 2026-01-06 15:38:13.828984619 +0000 UTC m=+0.093446323 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute)
Jan 06 15:38:15 compute-0 podman[239711]: 2026-01-06 15:38:15.828900555 +0000 UTC m=+0.100337952 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:38:21 compute-0 podman[239734]: 2026-01-06 15:38:21.886628573 +0000 UTC m=+0.136248486 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:38:21 compute-0 podman[239733]: 2026-01-06 15:38:21.89189302 +0000 UTC m=+0.145669041 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Jan 06 15:38:24 compute-0 podman[239775]: 2026-01-06 15:38:24.909796865 +0000 UTC m=+0.168136976 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Jan 06 15:38:28 compute-0 nova_compute[185513]: 2026-01-06 15:38:28.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:28 compute-0 nova_compute[185513]: 2026-01-06 15:38:28.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 06 15:38:29 compute-0 nova_compute[185513]: 2026-01-06 15:38:29.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:29 compute-0 podman[201918]: time="2026-01-06T15:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:38:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:38:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3876 "" "Go-http-client/1.1"
Jan 06 15:38:30 compute-0 podman[239796]: 2026-01-06 15:38:30.841095174 +0000 UTC m=+0.093126944 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:38:30 compute-0 podman[239795]: 2026-01-06 15:38:30.944106634 +0000 UTC m=+0.202505890 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:38:31 compute-0 nova_compute[185513]: 2026-01-06 15:38:31.036 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:31 compute-0 openstack_network_exporter[205258]: ERROR   15:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:38:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:38:31 compute-0 openstack_network_exporter[205258]: ERROR   15:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:38:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:38:33 compute-0 nova_compute[185513]: 2026-01-06 15:38:33.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:33 compute-0 nova_compute[185513]: 2026-01-06 15:38:33.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:33 compute-0 nova_compute[185513]: 2026-01-06 15:38:33.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 06 15:38:33 compute-0 nova_compute[185513]: 2026-01-06 15:38:33.052 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 06 15:38:34 compute-0 nova_compute[185513]: 2026-01-06 15:38:34.053 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:35 compute-0 nova_compute[185513]: 2026-01-06 15:38:35.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:35 compute-0 nova_compute[185513]: 2026-01-06 15:38:35.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:35 compute-0 nova_compute[185513]: 2026-01-06 15:38:35.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:38:35 compute-0 nova_compute[185513]: 2026-01-06 15:38:35.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:38:35 compute-0 nova_compute[185513]: 2026-01-06 15:38:35.040 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:38:35 compute-0 nova_compute[185513]: 2026-01-06 15:38:35.041 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.060 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.061 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.061 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.062 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.503 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.505 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5710MB free_disk=72.47980117797852GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.506 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.507 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.770 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.771 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:38:37 compute-0 sshd-session[239840]: Invalid user sol from 178.128.171.80 port 53508
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.844 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing inventories for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 06 15:38:37 compute-0 sshd-session[239840]: Connection closed by invalid user sol 178.128.171.80 port 53508 [preauth]
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.948 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating ProviderTree inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.949 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.964 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing aggregate associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 06 15:38:37 compute-0 nova_compute[185513]: 2026-01-06 15:38:37.991 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing trait associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 06 15:38:38 compute-0 nova_compute[185513]: 2026-01-06 15:38:38.013 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:38:38 compute-0 nova_compute[185513]: 2026-01-06 15:38:38.025 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:38:38 compute-0 nova_compute[185513]: 2026-01-06 15:38:38.027 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:38:38 compute-0 nova_compute[185513]: 2026-01-06 15:38:38.027 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:38:39 compute-0 nova_compute[185513]: 2026-01-06 15:38:39.028 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:39 compute-0 nova_compute[185513]: 2026-01-06 15:38:39.029 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:39 compute-0 nova_compute[185513]: 2026-01-06 15:38:39.030 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:38:40 compute-0 nova_compute[185513]: 2026-01-06 15:38:40.021 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:38:42 compute-0 podman[239842]: 2026-01-06 15:38:42.832608898 +0000 UTC m=+0.096089411 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 06 15:38:44 compute-0 podman[239860]: 2026-01-06 15:38:44.783530179 +0000 UTC m=+0.091792479 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute)
Jan 06 15:38:46 compute-0 podman[239880]: 2026-01-06 15:38:46.810247343 +0000 UTC m=+0.079228092 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Jan 06 15:38:52 compute-0 podman[239903]: 2026-01-06 15:38:52.838385883 +0000 UTC m=+0.091387838 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:38:52 compute-0 podman[239902]: 2026-01-06 15:38:52.838386283 +0000 UTC m=+0.104172011 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 06 15:38:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:38:53.678 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:38:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:38:53.678 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:38:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:38:53.678 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:38:55 compute-0 podman[239943]: 2026-01-06 15:38:55.859601984 +0000 UTC m=+0.113177165 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=kepler, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Jan 06 15:38:59 compute-0 podman[201918]: time="2026-01-06T15:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:38:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:38:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3882 "" "Go-http-client/1.1"
Jan 06 15:39:01 compute-0 openstack_network_exporter[205258]: ERROR   15:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:39:01 compute-0 openstack_network_exporter[205258]: ERROR   15:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:39:01 compute-0 podman[239964]: 2026-01-06 15:39:01.800838862 +0000 UTC m=+0.058928994 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:39:01 compute-0 podman[239963]: 2026-01-06 15:39:01.83919764 +0000 UTC m=+0.104674555 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 06 15:39:13 compute-0 podman[240011]: 2026-01-06 15:39:13.826482272 +0000 UTC m=+0.099491070 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:39:15 compute-0 podman[240029]: 2026-01-06 15:39:15.836182933 +0000 UTC m=+0.101903473 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 06 15:39:17 compute-0 podman[240049]: 2026-01-06 15:39:17.867179989 +0000 UTC m=+0.120916497 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git)
Jan 06 15:39:20 compute-0 nova_compute[185513]: 2026-01-06 15:39:20.649 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:39:23 compute-0 podman[240071]: 2026-01-06 15:39:23.840946803 +0000 UTC m=+0.096768669 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 06 15:39:23 compute-0 podman[240072]: 2026-01-06 15:39:23.855334847 +0000 UTC m=+0.102935559 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:39:26 compute-0 podman[240112]: 2026-01-06 15:39:26.828325453 +0000 UTC m=+0.098963046 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, build-date=2024-09-18T21:23:30, name=ubi9, io.openshift.tags=base rhel9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.4)
Jan 06 15:39:29 compute-0 podman[201918]: time="2026-01-06T15:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:39:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:39:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3879 "" "Go-http-client/1.1"
Jan 06 15:39:31 compute-0 nova_compute[185513]: 2026-01-06 15:39:31.047 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:39:31 compute-0 openstack_network_exporter[205258]: ERROR   15:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:39:31 compute-0 openstack_network_exporter[205258]: ERROR   15:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:39:32 compute-0 podman[240130]: 2026-01-06 15:39:32.840697406 +0000 UTC m=+0.097998686 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:39:32 compute-0 podman[240129]: 2026-01-06 15:39:32.869339846 +0000 UTC m=+0.134399589 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.069 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.069 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.089 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:39:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:39:34 compute-0 nova_compute[185513]: 2026-01-06 15:39:34.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:39:35 compute-0 nova_compute[185513]: 2026-01-06 15:39:35.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.041 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.042 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.043 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.125 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.127 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.127 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.128 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.542 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.543 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5712MB free_disk=72.47980117797852GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.544 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.544 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.616 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.617 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.645 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.658 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.660 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:39:37 compute-0 nova_compute[185513]: 2026-01-06 15:39:37.660 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:39:38 compute-0 nova_compute[185513]: 2026-01-06 15:39:38.640 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:39:38 compute-0 nova_compute[185513]: 2026-01-06 15:39:38.641 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:39:38 compute-0 nova_compute[185513]: 2026-01-06 15:39:38.642 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:39:44 compute-0 podman[240177]: 2026-01-06 15:39:44.787708747 +0000 UTC m=+0.111495269 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:39:46 compute-0 podman[240198]: 2026-01-06 15:39:46.825113117 +0000 UTC m=+0.098067198 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224)
Jan 06 15:39:48 compute-0 podman[240217]: 2026-01-06 15:39:48.8206947 +0000 UTC m=+0.088848216 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 06 15:39:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:39:53.680 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:39:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:39:53.681 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:39:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:39:53.682 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:39:54 compute-0 podman[240237]: 2026-01-06 15:39:54.838829654 +0000 UTC m=+0.108764268 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Jan 06 15:39:54 compute-0 podman[240238]: 2026-01-06 15:39:54.852931143 +0000 UTC m=+0.107066314 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:39:57 compute-0 podman[240281]: 2026-01-06 15:39:57.825069839 +0000 UTC m=+0.093157810 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, architecture=x86_64, release-0.7.12=, container_name=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.buildah.version=1.29.0, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 06 15:39:59 compute-0 podman[201918]: time="2026-01-06T15:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:39:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:39:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3887 "" "Go-http-client/1.1"
Jan 06 15:40:01 compute-0 openstack_network_exporter[205258]: ERROR   15:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:40:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:40:01 compute-0 openstack_network_exporter[205258]: ERROR   15:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:40:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:40:03 compute-0 podman[240302]: 2026-01-06 15:40:03.813155845 +0000 UTC m=+0.073891945 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:40:03 compute-0 podman[240301]: 2026-01-06 15:40:03.894826453 +0000 UTC m=+0.152847912 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 06 15:40:15 compute-0 podman[240346]: 2026-01-06 15:40:15.822753379 +0000 UTC m=+0.087017099 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 06 15:40:17 compute-0 podman[240365]: 2026-01-06 15:40:17.886758464 +0000 UTC m=+0.142410898 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224)
Jan 06 15:40:19 compute-0 podman[240384]: 2026-01-06 15:40:19.882463051 +0000 UTC m=+0.139861142 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, architecture=x86_64, distribution-scope=public, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal)
Jan 06 15:40:25 compute-0 podman[240405]: 2026-01-06 15:40:25.815922749 +0000 UTC m=+0.078759012 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:40:25 compute-0 podman[240404]: 2026-01-06 15:40:25.835963154 +0000 UTC m=+0.098261773 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:40:28 compute-0 podman[240445]: 2026-01-06 15:40:28.867018372 +0000 UTC m=+0.118356209 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, io.buildah.version=1.29.0, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Jan 06 15:40:29 compute-0 podman[201918]: time="2026-01-06T15:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:40:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:40:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3879 "" "Go-http-client/1.1"
Jan 06 15:40:31 compute-0 openstack_network_exporter[205258]: ERROR   15:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:40:31 compute-0 openstack_network_exporter[205258]: ERROR   15:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:40:33 compute-0 nova_compute[185513]: 2026-01-06 15:40:33.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:40:34 compute-0 podman[240465]: 2026-01-06 15:40:34.842379445 +0000 UTC m=+0.091390634 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:40:34 compute-0 podman[240464]: 2026-01-06 15:40:34.908613438 +0000 UTC m=+0.166312664 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller)
Jan 06 15:40:35 compute-0 nova_compute[185513]: 2026-01-06 15:40:35.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:40:36 compute-0 nova_compute[185513]: 2026-01-06 15:40:36.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.044 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.046 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.046 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.076 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.077 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.078 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.079 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.520 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.521 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5689MB free_disk=72.47982025146484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.522 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.522 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.614 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.615 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.648 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.668 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.671 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:40:38 compute-0 nova_compute[185513]: 2026-01-06 15:40:38.672 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:40:40 compute-0 nova_compute[185513]: 2026-01-06 15:40:40.648 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:40:40 compute-0 nova_compute[185513]: 2026-01-06 15:40:40.649 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:40:40 compute-0 nova_compute[185513]: 2026-01-06 15:40:40.650 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:40:41 compute-0 nova_compute[185513]: 2026-01-06 15:40:41.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:40:46 compute-0 podman[240513]: 2026-01-06 15:40:46.84185038 +0000 UTC m=+0.101521828 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 06 15:40:48 compute-0 podman[240532]: 2026-01-06 15:40:48.845232829 +0000 UTC m=+0.111658274 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 06 15:40:50 compute-0 podman[240553]: 2026-01-06 15:40:50.839693145 +0000 UTC m=+0.097866693 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64)
Jan 06 15:40:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:40:53.681 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:40:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:40:53.682 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:40:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:40:53.682 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:40:56 compute-0 podman[240574]: 2026-01-06 15:40:56.858502557 +0000 UTC m=+0.107444143 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:40:56 compute-0 podman[240573]: 2026-01-06 15:40:56.872254637 +0000 UTC m=+0.127654032 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 06 15:40:59 compute-0 podman[201918]: time="2026-01-06T15:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:40:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:40:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3877 "" "Go-http-client/1.1"
Jan 06 15:40:59 compute-0 podman[240614]: 2026-01-06 15:40:59.880775834 +0000 UTC m=+0.141694469 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=kepler, release-0.7.12=, io.buildah.version=1.29.0, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:41:01 compute-0 openstack_network_exporter[205258]: ERROR   15:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:41:01 compute-0 openstack_network_exporter[205258]: ERROR   15:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:41:05 compute-0 podman[240634]: 2026-01-06 15:41:05.844083803 +0000 UTC m=+0.103764977 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:41:05 compute-0 podman[240633]: 2026-01-06 15:41:05.917574537 +0000 UTC m=+0.171345736 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 06 15:41:17 compute-0 podman[240680]: 2026-01-06 15:41:17.862449061 +0000 UTC m=+0.117453966 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 06 15:41:19 compute-0 podman[240699]: 2026-01-06 15:41:19.844558643 +0000 UTC m=+0.106604881 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224)
Jan 06 15:41:21 compute-0 podman[240718]: 2026-01-06 15:41:21.8525038 +0000 UTC m=+0.124870799 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Jan 06 15:41:27 compute-0 podman[240739]: 2026-01-06 15:41:27.842888418 +0000 UTC m=+0.090446379 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 06 15:41:27 compute-0 podman[240740]: 2026-01-06 15:41:27.846780379 +0000 UTC m=+0.101506628 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:41:29 compute-0 podman[201918]: time="2026-01-06T15:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:41:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:41:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3887 "" "Go-http-client/1.1"
Jan 06 15:41:30 compute-0 podman[240781]: 2026-01-06 15:41:30.844909846 +0000 UTC m=+0.111107409 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=kepler, container_name=kepler, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 06 15:41:31 compute-0 openstack_network_exporter[205258]: ERROR   15:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:41:31 compute-0 openstack_network_exporter[205258]: ERROR   15:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.071 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.072 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:41:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:41:35 compute-0 nova_compute[185513]: 2026-01-06 15:41:35.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:41:35 compute-0 nova_compute[185513]: 2026-01-06 15:41:35.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:41:36 compute-0 podman[240803]: 2026-01-06 15:41:36.85006499 +0000 UTC m=+0.108067700 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:41:36 compute-0 podman[240802]: 2026-01-06 15:41:36.86690424 +0000 UTC m=+0.132101819 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 06 15:41:38 compute-0 nova_compute[185513]: 2026-01-06 15:41:38.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:41:38 compute-0 nova_compute[185513]: 2026-01-06 15:41:38.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:41:38 compute-0 nova_compute[185513]: 2026-01-06 15:41:38.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:41:38 compute-0 nova_compute[185513]: 2026-01-06 15:41:38.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:41:38 compute-0 nova_compute[185513]: 2026-01-06 15:41:38.038 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:41:38 compute-0 nova_compute[185513]: 2026-01-06 15:41:38.039 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:41:39 compute-0 nova_compute[185513]: 2026-01-06 15:41:39.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.053 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.054 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.055 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.055 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.446 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.448 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5694MB free_disk=72.47982025146484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.448 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.448 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.555 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.556 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.589 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.605 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.607 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:41:40 compute-0 nova_compute[185513]: 2026-01-06 15:41:40.608 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:41:42 compute-0 nova_compute[185513]: 2026-01-06 15:41:42.607 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:41:42 compute-0 nova_compute[185513]: 2026-01-06 15:41:42.608 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:41:48 compute-0 podman[240849]: 2026-01-06 15:41:48.874370363 +0000 UTC m=+0.127418401 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:41:50 compute-0 podman[240869]: 2026-01-06 15:41:50.827233097 +0000 UTC m=+0.100974561 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4)
Jan 06 15:41:52 compute-0 podman[240887]: 2026-01-06 15:41:52.877423078 +0000 UTC m=+0.124658624 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=openstack_network_exporter, version=9.6, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350)
Jan 06 15:41:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:41:53.683 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:41:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:41:53.684 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:41:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:41:53.685 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:41:58 compute-0 podman[240909]: 2026-01-06 15:41:58.809262024 +0000 UTC m=+0.080598017 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Jan 06 15:41:58 compute-0 podman[240910]: 2026-01-06 15:41:58.826546042 +0000 UTC m=+0.089502473 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:41:59 compute-0 podman[201918]: time="2026-01-06T15:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:41:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:41:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3885 "" "Go-http-client/1.1"
Jan 06 15:42:01 compute-0 openstack_network_exporter[205258]: ERROR   15:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:42:01 compute-0 openstack_network_exporter[205258]: ERROR   15:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:42:01 compute-0 podman[240949]: 2026-01-06 15:42:01.841436052 +0000 UTC m=+0.106828682 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., name=ubi9, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, distribution-scope=public)
Jan 06 15:42:07 compute-0 podman[240970]: 2026-01-06 15:42:07.836355342 +0000 UTC m=+0.097271548 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:42:07 compute-0 podman[240969]: 2026-01-06 15:42:07.935516491 +0000 UTC m=+0.190418101 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 06 15:42:19 compute-0 podman[241018]: 2026-01-06 15:42:19.809194381 +0000 UTC m=+0.080436264 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:42:21 compute-0 podman[241035]: 2026-01-06 15:42:21.852693658 +0000 UTC m=+0.114389420 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 06 15:42:23 compute-0 podman[241054]: 2026-01-06 15:42:23.858702811 +0000 UTC m=+0.120335765 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 06 15:42:29 compute-0 podman[201918]: time="2026-01-06T15:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:42:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:42:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3886 "" "Go-http-client/1.1"
Jan 06 15:42:29 compute-0 podman[241077]: 2026-01-06 15:42:29.85544534 +0000 UTC m=+0.093490404 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:42:29 compute-0 podman[241076]: 2026-01-06 15:42:29.876217953 +0000 UTC m=+0.125113187 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 06 15:42:31 compute-0 openstack_network_exporter[205258]: ERROR   15:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:42:31 compute-0 openstack_network_exporter[205258]: ERROR   15:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:42:32 compute-0 podman[241116]: 2026-01-06 15:42:32.84968329 +0000 UTC m=+0.104440176 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.component=ubi9-container, container_name=kepler, name=ubi9, config_id=kepler, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 06 15:42:36 compute-0 nova_compute[185513]: 2026-01-06 15:42:36.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:42:37 compute-0 nova_compute[185513]: 2026-01-06 15:42:37.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:42:38 compute-0 nova_compute[185513]: 2026-01-06 15:42:38.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:42:38 compute-0 nova_compute[185513]: 2026-01-06 15:42:38.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:42:38 compute-0 podman[241137]: 2026-01-06 15:42:38.839305354 +0000 UTC m=+0.098323897 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:42:38 compute-0 podman[241136]: 2026-01-06 15:42:38.939260465 +0000 UTC m=+0.187495660 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 06 15:42:40 compute-0 nova_compute[185513]: 2026-01-06 15:42:40.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:42:40 compute-0 nova_compute[185513]: 2026-01-06 15:42:40.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:42:40 compute-0 nova_compute[185513]: 2026-01-06 15:42:40.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:42:40 compute-0 nova_compute[185513]: 2026-01-06 15:42:40.061 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.081 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.082 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.082 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.083 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.452 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.454 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5697MB free_disk=72.47980117797852GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.454 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.454 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.524 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.524 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.559 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.576 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.579 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:42:41 compute-0 nova_compute[185513]: 2026-01-06 15:42:41.579 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:42:42 compute-0 nova_compute[185513]: 2026-01-06 15:42:42.580 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:42:43 compute-0 nova_compute[185513]: 2026-01-06 15:42:43.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:42:43 compute-0 nova_compute[185513]: 2026-01-06 15:42:43.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:42:44 compute-0 nova_compute[185513]: 2026-01-06 15:42:44.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:42:50 compute-0 podman[241187]: 2026-01-06 15:42:50.852400413 +0000 UTC m=+0.111399638 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 06 15:42:52 compute-0 podman[241204]: 2026-01-06 15:42:52.844550314 +0000 UTC m=+0.102200935 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Jan 06 15:42:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:42:53.684 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:42:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:42:53.685 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:42:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:42:53.685 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:42:54 compute-0 podman[241224]: 2026-01-06 15:42:54.852684455 +0000 UTC m=+0.123237096 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Jan 06 15:42:59 compute-0 podman[201918]: time="2026-01-06T15:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:42:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:42:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3884 "" "Go-http-client/1.1"
Jan 06 15:43:00 compute-0 podman[241245]: 2026-01-06 15:43:00.843118369 +0000 UTC m=+0.100071935 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:43:00 compute-0 podman[241244]: 2026-01-06 15:43:00.887936857 +0000 UTC m=+0.140877502 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 06 15:43:01 compute-0 openstack_network_exporter[205258]: ERROR   15:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:43:01 compute-0 openstack_network_exporter[205258]: ERROR   15:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:43:03 compute-0 podman[241286]: 2026-01-06 15:43:03.863783269 +0000 UTC m=+0.115042148 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, name=ubi9, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=kepler, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4)
Jan 06 15:43:09 compute-0 podman[241308]: 2026-01-06 15:43:09.811931736 +0000 UTC m=+0.074310354 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:43:09 compute-0 podman[241307]: 2026-01-06 15:43:09.837937344 +0000 UTC m=+0.107524041 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 06 15:43:12 compute-0 sshd-session[241305]: Connection closed by authenticating user root 5.187.35.21 port 58082 [preauth]
Jan 06 15:43:15 compute-0 sshd-session[241356]: Connection closed by authenticating user root 5.187.35.21 port 13370 [preauth]
Jan 06 15:43:18 compute-0 sshd-session[241358]: Connection closed by authenticating user root 5.187.35.21 port 13384 [preauth]
Jan 06 15:43:21 compute-0 podman[241363]: 2026-01-06 15:43:21.864190188 +0000 UTC m=+0.123477212 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:43:22 compute-0 sshd-session[241360]: Connection closed by authenticating user root 5.187.35.21 port 13398 [preauth]
Jan 06 15:43:23 compute-0 sshd-session[241383]: Invalid user  from 165.245.134.205 port 57278
Jan 06 15:43:23 compute-0 podman[241386]: 2026-01-06 15:43:23.8173406 +0000 UTC m=+0.092839645 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 15:43:25 compute-0 sshd-session[241382]: Connection closed by authenticating user root 5.187.35.21 port 29050 [preauth]
Jan 06 15:43:25 compute-0 podman[241404]: 2026-01-06 15:43:25.865664701 +0000 UTC m=+0.118679548 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, version=9.6, managed_by=edpm_ansible, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Jan 06 15:43:29 compute-0 sshd-session[241424]: Connection closed by authenticating user root 5.187.35.21 port 29076 [preauth]
Jan 06 15:43:29 compute-0 podman[201918]: time="2026-01-06T15:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:43:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:43:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3886 "" "Go-http-client/1.1"
Jan 06 15:43:31 compute-0 sshd-session[241383]: Connection closed by invalid user  165.245.134.205 port 57278 [preauth]
Jan 06 15:43:31 compute-0 openstack_network_exporter[205258]: ERROR   15:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:43:31 compute-0 openstack_network_exporter[205258]: ERROR   15:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:43:31 compute-0 podman[241428]: 2026-01-06 15:43:31.864964182 +0000 UTC m=+0.127565925 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 06 15:43:31 compute-0 podman[241429]: 2026-01-06 15:43:31.867778969 +0000 UTC m=+0.113450955 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:43:33 compute-0 nova_compute[185513]: 2026-01-06 15:43:33.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.070 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.072 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:43:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:43:33 compute-0 sshd-session[241426]: Connection closed by authenticating user root 5.187.35.21 port 29122 [preauth]
Jan 06 15:43:34 compute-0 podman[241473]: 2026-01-06 15:43:34.846929283 +0000 UTC m=+0.112515549 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, vcs-type=git, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, config_id=kepler)
Jan 06 15:43:36 compute-0 sshd-session[241471]: Connection closed by authenticating user root 5.187.35.21 port 17622 [preauth]
Jan 06 15:43:37 compute-0 nova_compute[185513]: 2026-01-06 15:43:37.036 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:38 compute-0 nova_compute[185513]: 2026-01-06 15:43:38.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:38 compute-0 nova_compute[185513]: 2026-01-06 15:43:38.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:40 compute-0 nova_compute[185513]: 2026-01-06 15:43:40.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:40 compute-0 nova_compute[185513]: 2026-01-06 15:43:40.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:40 compute-0 nova_compute[185513]: 2026-01-06 15:43:40.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 06 15:43:40 compute-0 sshd-session[241493]: Connection closed by authenticating user root 5.187.35.21 port 17644 [preauth]
Jan 06 15:43:40 compute-0 podman[241497]: 2026-01-06 15:43:40.840252197 +0000 UTC m=+0.094443810 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:43:40 compute-0 podman[241496]: 2026-01-06 15:43:40.890753562 +0000 UTC m=+0.159283001 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.038 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.039 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.039 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.058 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.059 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.060 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.092 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.093 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.094 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.095 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.563 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.564 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5685MB free_disk=72.47930908203125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.565 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.565 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.675 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.676 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.730 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing inventories for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.792 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating ProviderTree inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.792 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.808 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing aggregate associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.828 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing trait associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.852 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.866 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.869 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:43:41 compute-0 nova_compute[185513]: 2026-01-06 15:43:41.870 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:43:43 compute-0 sshd-session[241495]: Connection closed by authenticating user root 5.187.35.21 port 17684 [preauth]
Jan 06 15:43:44 compute-0 nova_compute[185513]: 2026-01-06 15:43:44.835 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:44 compute-0 nova_compute[185513]: 2026-01-06 15:43:44.835 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:44 compute-0 nova_compute[185513]: 2026-01-06 15:43:44.836 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:43:47 compute-0 nova_compute[185513]: 2026-01-06 15:43:47.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:43:47 compute-0 nova_compute[185513]: 2026-01-06 15:43:47.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 06 15:43:47 compute-0 nova_compute[185513]: 2026-01-06 15:43:47.229 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 06 15:43:47 compute-0 sshd-session[241546]: Connection closed by authenticating user root 5.187.35.21 port 46494 [preauth]
Jan 06 15:43:50 compute-0 sshd-session[241548]: Connection closed by authenticating user root 5.187.35.21 port 46516 [preauth]
Jan 06 15:43:52 compute-0 podman[241553]: 2026-01-06 15:43:52.839943828 +0000 UTC m=+0.093850427 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 06 15:43:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:43:53.685 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:43:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:43:53.686 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:43:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:43:53.686 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:43:54 compute-0 sshd-session[241551]: Connection closed by authenticating user root 5.187.35.21 port 46526 [preauth]
Jan 06 15:43:54 compute-0 podman[241574]: 2026-01-06 15:43:54.847310748 +0000 UTC m=+0.101029558 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4)
Jan 06 15:43:56 compute-0 podman[241594]: 2026-01-06 15:43:56.811353497 +0000 UTC m=+0.082273279 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=openstack_network_exporter, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 06 15:43:57 compute-0 sshd-session[241572]: Connection closed by authenticating user root 5.187.35.21 port 34668 [preauth]
Jan 06 15:43:59 compute-0 podman[201918]: time="2026-01-06T15:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:43:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:43:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3882 "" "Go-http-client/1.1"
Jan 06 15:44:01 compute-0 sshd-session[241613]: Connection closed by authenticating user root 5.187.35.21 port 34680 [preauth]
Jan 06 15:44:01 compute-0 openstack_network_exporter[205258]: ERROR   15:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:44:01 compute-0 openstack_network_exporter[205258]: ERROR   15:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:44:02 compute-0 podman[241618]: 2026-01-06 15:44:02.845410156 +0000 UTC m=+0.086908293 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:44:02 compute-0 podman[241617]: 2026-01-06 15:44:02.891167263 +0000 UTC m=+0.135815734 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 06 15:44:04 compute-0 sshd-session[241615]: Connection closed by authenticating user root 5.187.35.21 port 19252 [preauth]
Jan 06 15:44:05 compute-0 podman[241663]: 2026-01-06 15:44:05.864329957 +0000 UTC m=+0.114441796 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, config_id=kepler)
Jan 06 15:44:08 compute-0 sshd-session[241661]: Connection closed by authenticating user root 5.187.35.21 port 19264 [preauth]
Jan 06 15:44:11 compute-0 sshd-session[241683]: Connection closed by authenticating user root 5.187.35.21 port 19294 [preauth]
Jan 06 15:44:11 compute-0 podman[241687]: 2026-01-06 15:44:11.844878704 +0000 UTC m=+0.111934798 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 15:44:11 compute-0 podman[241686]: 2026-01-06 15:44:11.884015856 +0000 UTC m=+0.148190684 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 06 15:44:14 compute-0 sshd-session[241685]: Invalid user Antminer from 5.187.35.21 port 43098
Jan 06 15:44:14 compute-0 sshd-session[241685]: Connection closed by invalid user Antminer 5.187.35.21 port 43098 [preauth]
Jan 06 15:44:17 compute-0 sshd-session[241737]: Invalid user Antminer from 5.187.35.21 port 43118
Jan 06 15:44:18 compute-0 sshd-session[241737]: Connection closed by invalid user Antminer 5.187.35.21 port 43118 [preauth]
Jan 06 15:44:22 compute-0 sshd-session[241739]: Connection closed by authenticating user root 5.187.35.21 port 43124 [preauth]
Jan 06 15:44:23 compute-0 podman[241744]: 2026-01-06 15:44:23.837942308 +0000 UTC m=+0.093668462 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:44:24 compute-0 sshd-session[241742]: Connection closed by authenticating user root 5.187.35.21 port 25270 [preauth]
Jan 06 15:44:25 compute-0 podman[241763]: 2026-01-06 15:44:25.875037261 +0000 UTC m=+0.145042060 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Jan 06 15:44:27 compute-0 podman[241782]: 2026-01-06 15:44:27.873680379 +0000 UTC m=+0.129454675 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:44:27 compute-0 sshd-session[241761]: Connection closed by authenticating user root 5.187.35.21 port 25282 [preauth]
Jan 06 15:44:29 compute-0 podman[201918]: time="2026-01-06T15:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:44:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:44:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3883 "" "Go-http-client/1.1"
Jan 06 15:44:31 compute-0 openstack_network_exporter[205258]: ERROR   15:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:44:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:44:31 compute-0 openstack_network_exporter[205258]: ERROR   15:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:44:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:44:31 compute-0 sshd-session[241804]: Connection closed by authenticating user root 5.187.35.21 port 25292 [preauth]
Jan 06 15:44:33 compute-0 podman[241809]: 2026-01-06 15:44:33.822241464 +0000 UTC m=+0.073715921 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:44:33 compute-0 podman[241808]: 2026-01-06 15:44:33.842990736 +0000 UTC m=+0.101473499 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:44:34 compute-0 sshd-session[241806]: Connection closed by authenticating user root 5.187.35.21 port 40910 [preauth]
Jan 06 15:44:36 compute-0 podman[241853]: 2026-01-06 15:44:36.895043739 +0000 UTC m=+0.152362885 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, distribution-scope=public, version=9.4, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, architecture=x86_64, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Jan 06 15:44:37 compute-0 sshd-session[241851]: Invalid user admin from 5.187.35.21 port 40920
Jan 06 15:44:37 compute-0 nova_compute[185513]: 2026-01-06 15:44:37.229 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:44:38 compute-0 nova_compute[185513]: 2026-01-06 15:44:38.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:44:38 compute-0 nova_compute[185513]: 2026-01-06 15:44:38.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:44:38 compute-0 sshd-session[241851]: Connection closed by invalid user admin 5.187.35.21 port 40920 [preauth]
Jan 06 15:44:41 compute-0 sshd-session[241873]: Invalid user baikal from 5.187.35.21 port 40934
Jan 06 15:44:41 compute-0 sshd-session[241873]: Connection closed by invalid user baikal 5.187.35.21 port 40934 [preauth]
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.039 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.040 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.040 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.041 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.074 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.075 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.075 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.075 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.619 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.620 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5702MB free_disk=72.48006057739258GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.621 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.622 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.690 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.690 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.717 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.735 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.737 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:44:42 compute-0 nova_compute[185513]: 2026-01-06 15:44:42.737 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:44:42 compute-0 podman[241876]: 2026-01-06 15:44:42.88067646 +0000 UTC m=+0.130946265 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:44:42 compute-0 podman[241875]: 2026-01-06 15:44:42.941650662 +0000 UTC m=+0.197564857 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 06 15:44:44 compute-0 nova_compute[185513]: 2026-01-06 15:44:44.721 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:44:44 compute-0 nova_compute[185513]: 2026-01-06 15:44:44.722 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:44:46 compute-0 nova_compute[185513]: 2026-01-06 15:44:46.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:44:46 compute-0 nova_compute[185513]: 2026-01-06 15:44:46.047 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:44:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:44:53.687 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:44:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:44:53.688 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:44:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:44:53.689 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:44:54 compute-0 podman[241921]: 2026-01-06 15:44:54.904483873 +0000 UTC m=+0.130642516 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 06 15:44:56 compute-0 podman[241940]: 2026-01-06 15:44:56.856270705 +0000 UTC m=+0.108379914 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:44:58 compute-0 podman[241961]: 2026-01-06 15:44:58.876341253 +0000 UTC m=+0.137695253 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 06 15:44:59 compute-0 podman[201918]: time="2026-01-06T15:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:44:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:44:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3888 "" "Go-http-client/1.1"
Jan 06 15:45:01 compute-0 openstack_network_exporter[205258]: ERROR   15:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:45:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:45:01 compute-0 openstack_network_exporter[205258]: ERROR   15:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:45:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:45:04 compute-0 podman[241981]: 2026-01-06 15:45:04.86171998 +0000 UTC m=+0.127657976 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 06 15:45:04 compute-0 podman[241982]: 2026-01-06 15:45:04.877250173 +0000 UTC m=+0.139271435 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:45:07 compute-0 podman[242023]: 2026-01-06 15:45:07.857499894 +0000 UTC m=+0.113747727 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, container_name=kepler, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 06 15:45:13 compute-0 podman[242044]: 2026-01-06 15:45:13.856451629 +0000 UTC m=+0.115762581 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:45:13 compute-0 podman[242043]: 2026-01-06 15:45:13.886068277 +0000 UTC m=+0.152715204 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:45:25 compute-0 podman[242095]: 2026-01-06 15:45:25.852861171 +0000 UTC m=+0.111195899 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 06 15:45:27 compute-0 podman[242113]: 2026-01-06 15:45:27.868392699 +0000 UTC m=+0.135871566 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:45:29 compute-0 podman[201918]: time="2026-01-06T15:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:45:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:45:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3892 "" "Go-http-client/1.1"
Jan 06 15:45:29 compute-0 podman[242132]: 2026-01-06 15:45:29.865400335 +0000 UTC m=+0.125884990 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:45:31 compute-0 openstack_network_exporter[205258]: ERROR   15:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:45:31 compute-0 openstack_network_exporter[205258]: ERROR   15:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.072 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.074 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.089 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'power.state': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'power.state': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'power.state': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9d0efe840>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'power.state': [], 'disk.device.allocation': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:45:33.118 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:45:35 compute-0 podman[242155]: 2026-01-06 15:45:35.829206165 +0000 UTC m=+0.088369742 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:45:35 compute-0 podman[242154]: 2026-01-06 15:45:35.853160512 +0000 UTC m=+0.112181375 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 06 15:45:38 compute-0 nova_compute[185513]: 2026-01-06 15:45:38.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:45:38 compute-0 nova_compute[185513]: 2026-01-06 15:45:38.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:45:38 compute-0 podman[242199]: 2026-01-06 15:45:38.875296499 +0000 UTC m=+0.131716316 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:45:39 compute-0 nova_compute[185513]: 2026-01-06 15:45:39.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:45:42 compute-0 nova_compute[185513]: 2026-01-06 15:45:42.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:45:42 compute-0 nova_compute[185513]: 2026-01-06 15:45:42.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:45:42 compute-0 nova_compute[185513]: 2026-01-06 15:45:42.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:45:42 compute-0 nova_compute[185513]: 2026-01-06 15:45:42.044 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:45:42 compute-0 nova_compute[185513]: 2026-01-06 15:45:42.044 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:45:43 compute-0 nova_compute[185513]: 2026-01-06 15:45:43.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.054 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.055 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.055 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.055 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:45:44 compute-0 podman[242220]: 2026-01-06 15:45:44.188508783 +0000 UTC m=+0.116028528 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 15:45:44 compute-0 podman[242219]: 2026-01-06 15:45:44.257317763 +0000 UTC m=+0.179827155 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.503 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.505 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5691MB free_disk=72.4795913696289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.505 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.506 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.577 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.577 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.610 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.628 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.630 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:45:44 compute-0 nova_compute[185513]: 2026-01-06 15:45:44.630 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:45:46 compute-0 nova_compute[185513]: 2026-01-06 15:45:46.630 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:45:46 compute-0 nova_compute[185513]: 2026-01-06 15:45:46.630 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:45:47 compute-0 nova_compute[185513]: 2026-01-06 15:45:47.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:45:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:45:53.688 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:45:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:45:53.688 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:45:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:45:53.689 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:45:56 compute-0 podman[242270]: 2026-01-06 15:45:56.826049621 +0000 UTC m=+0.085865054 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 06 15:45:58 compute-0 podman[242288]: 2026-01-06 15:45:58.87544693 +0000 UTC m=+0.139769319 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Jan 06 15:45:59 compute-0 podman[201918]: time="2026-01-06T15:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:45:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:45:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3889 "" "Go-http-client/1.1"
Jan 06 15:46:00 compute-0 podman[242308]: 2026-01-06 15:46:00.854929192 +0000 UTC m=+0.119317205 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, architecture=x86_64, managed_by=edpm_ansible, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Jan 06 15:46:01 compute-0 openstack_network_exporter[205258]: ERROR   15:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:46:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:46:01 compute-0 openstack_network_exporter[205258]: ERROR   15:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:46:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:46:06 compute-0 podman[242327]: 2026-01-06 15:46:06.826278294 +0000 UTC m=+0.096239297 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:46:06 compute-0 podman[242328]: 2026-01-06 15:46:06.897321145 +0000 UTC m=+0.145624407 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:46:09 compute-0 podman[242371]: 2026-01-06 15:46:09.834328645 +0000 UTC m=+0.099640026 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 06 15:46:14 compute-0 podman[242389]: 2026-01-06 15:46:14.849222773 +0000 UTC m=+0.129923545 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:46:14 compute-0 podman[242388]: 2026-01-06 15:46:14.911453292 +0000 UTC m=+0.194909236 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:46:27 compute-0 podman[242438]: 2026-01-06 15:46:27.873602507 +0000 UTC m=+0.134721470 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 06 15:46:29 compute-0 podman[201918]: time="2026-01-06T15:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:46:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:46:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3892 "" "Go-http-client/1.1"
Jan 06 15:46:29 compute-0 podman[242458]: 2026-01-06 15:46:29.840847593 +0000 UTC m=+0.105583123 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 06 15:46:31 compute-0 openstack_network_exporter[205258]: ERROR   15:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:46:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:46:31 compute-0 openstack_network_exporter[205258]: ERROR   15:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:46:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:46:31 compute-0 podman[242478]: 2026-01-06 15:46:31.845730722 +0000 UTC m=+0.105125561 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter)
Jan 06 15:46:37 compute-0 podman[242498]: 2026-01-06 15:46:37.866181015 +0000 UTC m=+0.118620396 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 06 15:46:37 compute-0 podman[242499]: 2026-01-06 15:46:37.888725669 +0000 UTC m=+0.131946227 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 15:46:38 compute-0 nova_compute[185513]: 2026-01-06 15:46:38.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:46:40 compute-0 nova_compute[185513]: 2026-01-06 15:46:40.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:46:40 compute-0 nova_compute[185513]: 2026-01-06 15:46:40.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:46:40 compute-0 podman[242541]: 2026-01-06 15:46:40.854751023 +0000 UTC m=+0.121364398 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release=1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_id=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:46:42 compute-0 nova_compute[185513]: 2026-01-06 15:46:42.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.022 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.038 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.039 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.039 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.073 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.074 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.075 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.075 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.596 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.599 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5710MB free_disk=72.4795913696289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.600 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.600 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.702 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.703 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.742 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.758 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
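The inventory reported above carries the allocation ratios that turn raw capacity into schedulable capacity; placement-style usable capacity works out to (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic, using the exact numbers from the log line:

    # Hedged sketch: usable capacity per resource class, computed as
    # (total - reserved) * allocation_ratio from the inventory logged above.
    def usable(total, reserved, allocation_ratio):
        return (total - reserved) * allocation_ratio

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, usable(**inv))
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~71.1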
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.761 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:46:44 compute-0 nova_compute[185513]: 2026-01-06 15:46:44.762 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
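The paired "Acquiring lock" / "acquired by" / "released by" lines above come from oslo.concurrency's lockutils wrapper around the "compute_resources" semaphore. A minimal sketch of that pattern (illustrative, not nova's resource tracker code):

    # Minimal sketch of the oslo.concurrency pattern behind the lock lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Runs with the "compute_resources" semaphore held; the decorator's
        # wrapper is what emits the DEBUG "acquired by"/"released by" lines.
        pass

    # The context-manager form takes the same named lock directly:
    with lockutils.lock("compute_resources"):
        pass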
Jan 06 15:46:45 compute-0 podman[242561]: 2026-01-06 15:46:45.89235599 +0000 UTC m=+0.138033248 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 15:46:45 compute-0 podman[242560]: 2026-01-06 15:46:45.975352727 +0000 UTC m=+0.230510864 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
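The health_status=healthy events above are podman's healthcheck timer running each container's configured 'test' command. A hedged way to read the same status by hand; the inspect JSON key is "Health" on recent podman and "Healthcheck" on older releases, so both are tried:

    # Hedged sketch: read a container's health status as reported in the
    # health_status events above, via `podman inspect`.
    import json
    import subprocess

    def health_status(container):
        data = json.loads(
            subprocess.run(["podman", "inspect", container],
                           capture_output=True, text=True, check=True).stdout
        )[0]
        state = data.get("State", {})
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("node_exporter"))   # expected: "healthy", per the log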
Jan 06 15:46:46 compute-0 nova_compute[185513]: 2026-01-06 15:46:46.746 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:46:46 compute-0 nova_compute[185513]: 2026-01-06 15:46:46.747 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:46:49 compute-0 nova_compute[185513]: 2026-01-06 15:46:49.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:46:51 compute-0 nova_compute[185513]: 2026-01-06 15:46:51.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:46:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:46:53.690 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:46:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:46:53.691 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:46:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:46:53.692 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:46:58 compute-0 podman[242608]: 2026-01-06 15:46:58.838988318 +0000 UTC m=+0.096344500 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 06 15:46:59 compute-0 podman[201918]: time="2026-01-06T15:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:46:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:46:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3895 "" "Go-http-client/1.1"
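The podman[201918] access-log lines above are the libpod REST API service answering the podman exporter over the podman socket. A hedged stdlib-only sketch that issues the same containers/json GET (the socket path /run/podman/podman.sock is assumed; it matches the CONTAINER_HOST value the podman_exporter container is started with later in this log):

    # Hedged sketch: query the libpod REST API over the unix socket with the
    # Python standard library only.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))   # e.g. ['node_exporter'] running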
Jan 06 15:47:00 compute-0 podman[242626]: 2026-01-06 15:47:00.865597016 +0000 UTC m=+0.127483649 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 06 15:47:01 compute-0 openstack_network_exporter[205258]: ERROR   15:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:47:01 compute-0 openstack_network_exporter[205258]: ERROR   15:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
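The "please specify an existing datapath" errors above indicate the exporter's dpif-netdev/* appctl calls target the userspace (netdev) datapath, which does not exist on a host running only the kernel (system) datapath. A hedged sketch that checks which datapaths exist before issuing those calls:

    # Hedged sketch: dpif-netdev/* commands only apply to a userspace (netdev)
    # datapath; check "ovs-appctl dpif/show" first to avoid the error above.
    import subprocess

    def run_appctl(*args):
        return subprocess.run(["ovs-appctl", *args],
                              capture_output=True, text=True).stdout

    if "netdev@" in run_appctl("dpif/show"):
        print(run_appctl("dpif-netdev/pmd-perf-show"))
    else:
        print("no netdev datapath on this host; skipping PMD statistics")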
Jan 06 15:47:02 compute-0 podman[242645]: 2026-01-06 15:47:02.889393643 +0000 UTC m=+0.146942312 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1755695350, maintainer=Red Hat, Inc., config_id=openstack_network_exporter)
Jan 06 15:47:08 compute-0 podman[242667]: 2026-01-06 15:47:08.832238472 +0000 UTC m=+0.086662424 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:47:08 compute-0 podman[242666]: 2026-01-06 15:47:08.84547376 +0000 UTC m=+0.109004772 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 06 15:47:11 compute-0 podman[242708]: 2026-01-06 15:47:11.877698818 +0000 UTC m=+0.136982110 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler)
Jan 06 15:47:16 compute-0 podman[242727]: 2026-01-06 15:47:16.896732228 +0000 UTC m=+0.149874809 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:47:16 compute-0 podman[242726]: 2026-01-06 15:47:16.910989684 +0000 UTC m=+0.164228648 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 06 15:47:29 compute-0 podman[201918]: time="2026-01-06T15:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:47:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:47:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3890 "" "Go-http-client/1.1"
Jan 06 15:47:29 compute-0 podman[242777]: 2026-01-06 15:47:29.89752198 +0000 UTC m=+0.155903249 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 06 15:47:31 compute-0 openstack_network_exporter[205258]: ERROR   15:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:47:31 compute-0 openstack_network_exporter[205258]: ERROR   15:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:47:31 compute-0 podman[242796]: 2026-01-06 15:47:31.857848134 +0000 UTC m=+0.119462649 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.075 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.077 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
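The two lines above say there are more pollsters than worker threads and that this cycle runs with a single thread, so polling tasks queue behind one worker. A minimal sketch (not ceilometer's code) of that dispatch pattern:

    # Minimal sketch of the dispatch pattern described above: many pollster
    # tasks queued behind a single-worker ThreadPoolExecutor, so the cycle
    # takes longer than any one pollster's runtime.
    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(name):
        # stand-in for a pollster's sample collection call
        return f"{name}: polled"

    pollsters = ["cpu", "disk.device.read.bytes", "network.incoming.packets.drop"]
    with ThreadPoolExecutor(max_workers=1) as pool:        # 1 worker, as in the log
        for result in pool.map(run_pollster, pollsters):   # tasks execute serially
            print(result)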
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:47:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:47:33 compute-0 podman[242817]: 2026-01-06 15:47:33.850686425 +0000 UTC m=+0.107919354 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_id=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Jan 06 15:47:39 compute-0 nova_compute[185513]: 2026-01-06 15:47:39.034 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:47:39 compute-0 podman[242840]: 2026-01-06 15:47:39.824382258 +0000 UTC m=+0.082181797 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 15:47:39 compute-0 podman[242839]: 2026-01-06 15:47:39.830844238 +0000 UTC m=+0.102375089 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Jan 06 15:47:40 compute-0 nova_compute[185513]: 2026-01-06 15:47:40.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:47:41 compute-0 nova_compute[185513]: 2026-01-06 15:47:41.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:47:42 compute-0 podman[242880]: 2026-01-06 15:47:42.842884673 +0000 UTC m=+0.105360067 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, config_id=kepler, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:47:44 compute-0 nova_compute[185513]: 2026-01-06 15:47:44.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.062 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.063 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.064 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.064 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.065 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.099 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.100 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.101 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.101 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.531 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.533 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5700MB free_disk=72.4795913696289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.533 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.533 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.630 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.631 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.662 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.679 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.681 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:47:46 compute-0 nova_compute[185513]: 2026-01-06 15:47:46.682 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:47:47 compute-0 podman[242901]: 2026-01-06 15:47:47.844285146 +0000 UTC m=+0.093674938 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:47:47 compute-0 podman[242900]: 2026-01-06 15:47:47.860601226 +0000 UTC m=+0.119698584 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true)
Jan 06 15:47:49 compute-0 nova_compute[185513]: 2026-01-06 15:47:49.642 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:47:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:47:53.691 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:47:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:47:53.692 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:47:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:47:53.692 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:47:59 compute-0 podman[201918]: time="2026-01-06T15:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:47:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:47:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3891 "" "Go-http-client/1.1"
Jan 06 15:48:00 compute-0 podman[242949]: 2026-01-06 15:48:00.845646147 +0000 UTC m=+0.103053076 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 06 15:48:01 compute-0 openstack_network_exporter[205258]: ERROR   15:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:48:01 compute-0 openstack_network_exporter[205258]: ERROR   15:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:48:02 compute-0 podman[242968]: 2026-01-06 15:48:02.883856 +0000 UTC m=+0.136140159 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 06 15:48:04 compute-0 podman[242987]: 2026-01-06 15:48:04.871626557 +0000 UTC m=+0.126517355 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:48:10 compute-0 podman[243008]: 2026-01-06 15:48:10.876583123 +0000 UTC m=+0.125990401 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 06 15:48:10 compute-0 podman[243009]: 2026-01-06 15:48:10.896695833 +0000 UTC m=+0.140330749 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:48:14 compute-0 podman[243046]: 2026-01-06 15:48:14.213731005 +0000 UTC m=+0.478411877 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc.)
Jan 06 15:48:18 compute-0 podman[243066]: 2026-01-06 15:48:18.843694032 +0000 UTC m=+0.094894722 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:48:18 compute-0 podman[243065]: 2026-01-06 15:48:18.913968883 +0000 UTC m=+0.171268614 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
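The five health_status lines above are podman's periodic health checks for the telemetry and OVN containers; each config_data block mounts /var/lib/openstack/healthchecks/<name> at /openstack and runs the listed 'test' command. A minimal sketch of re-running those same checks by hand with `podman healthcheck run` (container names copied from the log; the manual invocation is an assumption for illustration, not something this log records):

    # Hypothetical helper: trigger the configured health check for each
    # container named in the health_status entries above.
    # `podman healthcheck run NAME` exits 0 when the check passes.
    import subprocess

    for name in ("ceilometer_agent_ipmi", "podman_exporter", "kepler",
                 "node_exporter", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")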
Jan 06 15:48:29 compute-0 podman[201918]: time="2026-01-06T15:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:48:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:48:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3890 "" "Go-http-client/1.1"
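The two GET lines above are requests arriving on podman's libpod REST API socket (the podman_exporter container is configured with CONTAINER_HOST=unix:///run/podman/podman.sock, so it is the likely client). A minimal sketch of an equivalent query over that socket, assuming the same socket path and API version shown in the log (the UnixHTTPConnection helper is illustrative, not part of any library used here):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a unix domain socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    # Same endpoint and query string as the first GET line above.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Names"], c["State"])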
Jan 06 15:48:31 compute-0 openstack_network_exporter[205258]: ERROR   15:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:48:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:48:31 compute-0 openstack_network_exporter[205258]: ERROR   15:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:48:31 compute-0 openstack_network_exporter[205258]: 
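The two ERROR lines show openstack_network_exporter invoking OVS appctl commands that only apply to the userspace (netdev) datapath; on a host where Open vSwitch uses the kernel datapath and runs no PMD threads, both return "please specify an existing datapath", so they read as expected noise rather than a fault. A minimal sketch of the same probes made directly, assuming ovs-appctl is installed and ovs-vswitchd is running (this is not how the exporter itself is invoked):

    import subprocess

    # dpif-netdev/* commands address the userspace datapath only; with the
    # kernel datapath they fail exactly like the exporter's appctl.go calls.
    for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
        result = subprocess.run(["ovs-appctl", cmd],
                                capture_output=True, text=True)
        status = "ok" if result.returncode == 0 else "failed"
        print(f"{cmd}: {status} {result.stderr.strip()}")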
Jan 06 15:48:31 compute-0 podman[243117]: 2026-01-06 15:48:31.875787738 +0000 UTC m=+0.132820951 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 06 15:48:33 compute-0 podman[243134]: 2026-01-06 15:48:33.837905138 +0000 UTC m=+0.105598833 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 15:48:35 compute-0 podman[243154]: 2026-01-06 15:48:35.851394573 +0000 UTC m=+0.110925014 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:48:40 compute-0 nova_compute[185513]: 2026-01-06 15:48:40.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:40 compute-0 nova_compute[185513]: 2026-01-06 15:48:40.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:41 compute-0 nova_compute[185513]: 2026-01-06 15:48:41.037 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:41 compute-0 nova_compute[185513]: 2026-01-06 15:48:41.038 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:41 compute-0 podman[243176]: 2026-01-06 15:48:41.877000914 +0000 UTC m=+0.124388928 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:48:41 compute-0 podman[243175]: 2026-01-06 15:48:41.908633227 +0000 UTC m=+0.165966963 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 06 15:48:44 compute-0 nova_compute[185513]: 2026-01-06 15:48:44.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:44 compute-0 nova_compute[185513]: 2026-01-06 15:48:44.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:44 compute-0 nova_compute[185513]: 2026-01-06 15:48:44.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 06 15:48:44 compute-0 podman[243216]: 2026-01-06 15:48:44.830486197 +0000 UTC m=+0.126995467 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.openshift.expose-services=, config_id=kepler, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., version=9.4)
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.040 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.041 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.041 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.060 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.062 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.063 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.063 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.108 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.109 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.110 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.110 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.661 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.663 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5698MB free_disk=72.47957229614258GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.663 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.664 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.964 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:48:46 compute-0 nova_compute[185513]: 2026-01-06 15:48:46.965 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:48:47 compute-0 nova_compute[185513]: 2026-01-06 15:48:47.040 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing inventories for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 06 15:48:47 compute-0 nova_compute[185513]: 2026-01-06 15:48:47.142 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating ProviderTree inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 06 15:48:47 compute-0 nova_compute[185513]: 2026-01-06 15:48:47.143 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 15:48:47 compute-0 nova_compute[185513]: 2026-01-06 15:48:47.166 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing aggregate associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 06 15:48:47 compute-0 nova_compute[185513]: 2026-01-06 15:48:47.219 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing trait associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 06 15:48:47 compute-0 nova_compute[185513]: 2026-01-06 15:48:47.246 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:48:47 compute-0 nova_compute[185513]: 2026-01-06 15:48:47.264 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:48:47 compute-0 nova_compute[185513]: 2026-01-06 15:48:47.265 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:48:47 compute-0 nova_compute[185513]: 2026-01-06 15:48:47.266 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
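The inventory nova reports above fixes the provider's schedulable capacity. A worked sketch using the standard placement relation capacity = (total - reserved) * allocation_ratio, with values copied from the 15:48:47 inventory lines (the formula is general placement behaviour, not quoted from this log):

    # Resource-class figures taken from the ProviderTree inventory above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 0, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable units")

    # Expected output: VCPU: 32, MEMORY_MB: 7167, DISK_GB: 71.1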
Jan 06 15:48:48 compute-0 nova_compute[185513]: 2026-01-06 15:48:48.226 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:49 compute-0 podman[243238]: 2026-01-06 15:48:49.859349074 +0000 UTC m=+0.111451338 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:48:49 compute-0 podman[243237]: 2026-01-06 15:48:49.915741961 +0000 UTC m=+0.178173076 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 06 15:48:50 compute-0 nova_compute[185513]: 2026-01-06 15:48:50.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:52 compute-0 nova_compute[185513]: 2026-01-06 15:48:52.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:52 compute-0 nova_compute[185513]: 2026-01-06 15:48:52.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 06 15:48:52 compute-0 nova_compute[185513]: 2026-01-06 15:48:52.043 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 06 15:48:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:48:53.692 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:48:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:48:53.693 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:48:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:48:53.694 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:48:56 compute-0 nova_compute[185513]: 2026-01-06 15:48:56.038 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:48:59 compute-0 podman[201918]: time="2026-01-06T15:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:48:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:48:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3892 "" "Go-http-client/1.1"
Jan 06 15:49:01 compute-0 openstack_network_exporter[205258]: ERROR   15:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:49:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:49:01 compute-0 openstack_network_exporter[205258]: ERROR   15:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:49:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:49:02 compute-0 podman[243284]: 2026-01-06 15:49:02.822671471 +0000 UTC m=+0.090008652 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 15:49:04 compute-0 podman[243304]: 2026-01-06 15:49:04.885786922 +0000 UTC m=+0.136389634 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, io.buildah.version=1.41.4, tcib_managed=true)
Jan 06 15:49:06 compute-0 podman[243325]: 2026-01-06 15:49:06.853408707 +0000 UTC m=+0.115092774 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc.)
Jan 06 15:49:12 compute-0 podman[243346]: 2026-01-06 15:49:12.865299217 +0000 UTC m=+0.112599617 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:49:12 compute-0 podman[243345]: 2026-01-06 15:49:12.876969365 +0000 UTC m=+0.133371825 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 06 15:49:15 compute-0 podman[243387]: 2026-01-06 15:49:15.864250158 +0000 UTC m=+0.119875829 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=kepler, vendor=Red Hat, Inc., name=ubi9, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 06 15:49:20 compute-0 nova_compute[185513]: 2026-01-06 15:49:20.649 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:49:20 compute-0 podman[243408]: 2026-01-06 15:49:20.903713374 +0000 UTC m=+0.150071814 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:49:20 compute-0 podman[243407]: 2026-01-06 15:49:20.938424479 +0000 UTC m=+0.189616507 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:49:29 compute-0 podman[201918]: time="2026-01-06T15:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:49:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:49:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3894 "" "Go-http-client/1.1"
Jan 06 15:49:31 compute-0 openstack_network_exporter[205258]: ERROR   15:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:49:31 compute-0 openstack_network_exporter[205258]: ERROR   15:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.075 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.076 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:49:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:49:33 compute-0 podman[243456]: 2026-01-06 15:49:33.848761928 +0000 UTC m=+0.111683783 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:49:35 compute-0 podman[243476]: 2026-01-06 15:49:35.843600931 +0000 UTC m=+0.104391361 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Jan 06 15:49:37 compute-0 podman[243496]: 2026-01-06 15:49:37.892828927 +0000 UTC m=+0.152873690 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:49:40 compute-0 nova_compute[185513]: 2026-01-06 15:49:40.048 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:49:41 compute-0 nova_compute[185513]: 2026-01-06 15:49:41.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:49:43 compute-0 nova_compute[185513]: 2026-01-06 15:49:43.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:49:43 compute-0 podman[243517]: 2026-01-06 15:49:43.889106963 +0000 UTC m=+0.139609600 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 15:49:43 compute-0 podman[243516]: 2026-01-06 15:49:43.919078223 +0000 UTC m=+0.176752649 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:49:44 compute-0 nova_compute[185513]: 2026-01-06 15:49:44.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.068 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.069 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.069 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.070 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.591 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.594 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5686MB free_disk=72.48004150390625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.594 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.595 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.685 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.686 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.729 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.749 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.751 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:49:46 compute-0 nova_compute[185513]: 2026-01-06 15:49:46.751 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:49:46 compute-0 podman[243558]: 2026-01-06 15:49:46.869863984 +0000 UTC m=+0.131096095 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, release-0.7.12=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543)
Jan 06 15:49:48 compute-0 nova_compute[185513]: 2026-01-06 15:49:48.752 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:49:48 compute-0 nova_compute[185513]: 2026-01-06 15:49:48.753 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:49:48 compute-0 nova_compute[185513]: 2026-01-06 15:49:48.753 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:49:48 compute-0 nova_compute[185513]: 2026-01-06 15:49:48.779 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:49:48 compute-0 nova_compute[185513]: 2026-01-06 15:49:48.780 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:49:51 compute-0 nova_compute[185513]: 2026-01-06 15:49:51.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:49:51 compute-0 podman[243578]: 2026-01-06 15:49:51.893702369 +0000 UTC m=+0.138366676 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:49:51 compute-0 podman[243577]: 2026-01-06 15:49:51.939295131 +0000 UTC m=+0.190585003 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 06 15:49:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:49:53.694 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:49:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:49:53.695 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:49:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:49:53.695 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:49:59 compute-0 podman[201918]: time="2026-01-06T15:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:49:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:49:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3896 "" "Go-http-client/1.1"
Jan 06 15:50:01 compute-0 openstack_network_exporter[205258]: ERROR   15:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:50:01 compute-0 openstack_network_exporter[205258]: ERROR   15:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:50:01 compute-0 anacron[30942]: Job `cron.daily' started
Jan 06 15:50:01 compute-0 anacron[30942]: Job `cron.daily' terminated
Jan 06 15:50:04 compute-0 podman[243629]: 2026-01-06 15:50:04.848867291 +0000 UTC m=+0.111450508 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 06 15:50:06 compute-0 podman[243647]: 2026-01-06 15:50:06.870791076 +0000 UTC m=+0.130371546 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224)
Jan 06 15:50:08 compute-0 podman[243666]: 2026-01-06 15:50:08.850678466 +0000 UTC m=+0.115150465 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 06 15:50:14 compute-0 podman[243689]: 2026-01-06 15:50:14.823190766 +0000 UTC m=+0.105179432 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 15:50:14 compute-0 podman[243688]: 2026-01-06 15:50:14.837492773 +0000 UTC m=+0.126156864 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 06 15:50:17 compute-0 podman[243729]: 2026-01-06 15:50:17.878230225 +0000 UTC m=+0.126427943 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, managed_by=edpm_ansible, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 06 15:50:22 compute-0 podman[243750]: 2026-01-06 15:50:22.836476552 +0000 UTC m=+0.096234097 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:50:22 compute-0 podman[243749]: 2026-01-06 15:50:22.944792647 +0000 UTC m=+0.201545690 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 06 15:50:29 compute-0 podman[201918]: time="2026-01-06T15:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:50:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:50:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3894 "" "Go-http-client/1.1"
Jan 06 15:50:31 compute-0 openstack_network_exporter[205258]: ERROR   15:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:50:31 compute-0 openstack_network_exporter[205258]: ERROR   15:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:50:35 compute-0 podman[243797]: 2026-01-06 15:50:35.85611894 +0000 UTC m=+0.104392389 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 06 15:50:37 compute-0 podman[243815]: 2026-01-06 15:50:37.874083737 +0000 UTC m=+0.131203521 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Jan 06 15:50:39 compute-0 podman[243834]: 2026-01-06 15:50:39.900268737 +0000 UTC m=+0.157238961 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, config_id=openstack_network_exporter, name=ubi9-minimal, maintainer=Red Hat, Inc.)
Jan 06 15:50:42 compute-0 nova_compute[185513]: 2026-01-06 15:50:42.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:50:42 compute-0 nova_compute[185513]: 2026-01-06 15:50:42.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:50:44 compute-0 nova_compute[185513]: 2026-01-06 15:50:44.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:50:45 compute-0 podman[243854]: 2026-01-06 15:50:45.854014025 +0000 UTC m=+0.099365462 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 15:50:45 compute-0 podman[243853]: 2026-01-06 15:50:45.895420255 +0000 UTC m=+0.148725069 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:50:46 compute-0 nova_compute[185513]: 2026-01-06 15:50:46.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.057 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.058 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.059 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.059 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.560 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.562 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5691MB free_disk=72.47981262207031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.563 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.564 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.651 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.652 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.684 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.713 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.715 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:50:47 compute-0 nova_compute[185513]: 2026-01-06 15:50:47.716 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:50:48 compute-0 nova_compute[185513]: 2026-01-06 15:50:48.717 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:50:48 compute-0 nova_compute[185513]: 2026-01-06 15:50:48.718 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:50:48 compute-0 nova_compute[185513]: 2026-01-06 15:50:48.719 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:50:48 compute-0 podman[243890]: 2026-01-06 15:50:48.854383345 +0000 UTC m=+0.114096224 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, vcs-type=git, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, build-date=2024-09-18T21:23:30, version=9.4, container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container)
Jan 06 15:50:49 compute-0 nova_compute[185513]: 2026-01-06 15:50:49.026 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:50:49 compute-0 nova_compute[185513]: 2026-01-06 15:50:49.027 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:50:49 compute-0 nova_compute[185513]: 2026-01-06 15:50:49.028 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:50:49 compute-0 nova_compute[185513]: 2026-01-06 15:50:49.054 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:50:53 compute-0 nova_compute[185513]: 2026-01-06 15:50:53.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:50:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:50:53.696 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:50:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:50:53.698 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:50:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:50:53.698 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:50:53 compute-0 podman[243911]: 2026-01-06 15:50:53.867640412 +0000 UTC m=+0.114972518 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:50:53 compute-0 podman[243910]: 2026-01-06 15:50:53.922961572 +0000 UTC m=+0.185217335 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:50:58 compute-0 nova_compute[185513]: 2026-01-06 15:50:58.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:50:59 compute-0 podman[201918]: time="2026-01-06T15:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:50:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:50:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3886 "" "Go-http-client/1.1"
Jan 06 15:51:01 compute-0 openstack_network_exporter[205258]: ERROR   15:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:51:01 compute-0 openstack_network_exporter[205258]: ERROR   15:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:51:06 compute-0 podman[243959]: 2026-01-06 15:51:06.86522493 +0000 UTC m=+0.130642455 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:51:08 compute-0 podman[243978]: 2026-01-06 15:51:08.866415289 +0000 UTC m=+0.123360098 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute)
Jan 06 15:51:10 compute-0 podman[243997]: 2026-01-06 15:51:10.84345918 +0000 UTC m=+0.104617296 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter)
Jan 06 15:51:16 compute-0 podman[244019]: 2026-01-06 15:51:16.826898767 +0000 UTC m=+0.080651422 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 15:51:16 compute-0 podman[244018]: 2026-01-06 15:51:16.857582844 +0000 UTC m=+0.119618665 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 06 15:51:19 compute-0 podman[244063]: 2026-01-06 15:51:19.806600303 +0000 UTC m=+0.076696534 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=base rhel9, distribution-scope=public, maintainer=Red Hat, Inc., config_id=kepler, io.buildah.version=1.29.0, name=ubi9, vendor=Red Hat, Inc., release=1214.1726694543, managed_by=edpm_ansible, vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:51:24 compute-0 podman[244084]: 2026-01-06 15:51:24.836787013 +0000 UTC m=+0.090814358 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:51:24 compute-0 podman[244083]: 2026-01-06 15:51:24.914385281 +0000 UTC m=+0.162714261 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 06 15:51:29 compute-0 podman[201918]: time="2026-01-06T15:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:51:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:51:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3901 "" "Go-http-client/1.1"
Jan 06 15:51:31 compute-0 openstack_network_exporter[205258]: ERROR   15:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:51:31 compute-0 openstack_network_exporter[205258]: ERROR   15:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.075 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.076 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.089 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.089 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:51:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:51:37 compute-0 podman[244133]: 2026-01-06 15:51:37.821286103 +0000 UTC m=+0.087614571 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:51:39 compute-0 podman[244151]: 2026-01-06 15:51:39.858055913 +0000 UTC m=+0.119833090 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224)
Jan 06 15:51:41 compute-0 podman[244170]: 2026-01-06 15:51:41.855007387 +0000 UTC m=+0.119947014 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=openstack_network_exporter, version=9.6, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible)
Jan 06 15:51:42 compute-0 nova_compute[185513]: 2026-01-06 15:51:42.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:51:42 compute-0 nova_compute[185513]: 2026-01-06 15:51:42.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:51:46 compute-0 nova_compute[185513]: 2026-01-06 15:51:46.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:51:46 compute-0 nova_compute[185513]: 2026-01-06 15:51:46.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:51:47 compute-0 podman[244192]: 2026-01-06 15:51:47.859307843 +0000 UTC m=+0.110100665 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:51:47 compute-0 podman[244191]: 2026-01-06 15:51:47.879218346 +0000 UTC m=+0.136515486 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 06 15:51:48 compute-0 nova_compute[185513]: 2026-01-06 15:51:48.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.074 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.075 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.075 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.076 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.584 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.586 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5689MB free_disk=72.47979354858398GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.587 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.587 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.691 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.691 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.745 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.763 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.766 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:51:49 compute-0 nova_compute[185513]: 2026-01-06 15:51:49.766 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:51:50 compute-0 podman[244233]: 2026-01-06 15:51:50.888258382 +0000 UTC m=+0.144838134 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Jan 06 15:51:51 compute-0 nova_compute[185513]: 2026-01-06 15:51:51.766 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:51:51 compute-0 nova_compute[185513]: 2026-01-06 15:51:51.767 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:51:51 compute-0 nova_compute[185513]: 2026-01-06 15:51:51.767 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:51:51 compute-0 nova_compute[185513]: 2026-01-06 15:51:51.805 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:51:53 compute-0 nova_compute[185513]: 2026-01-06 15:51:53.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:51:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:51:53.698 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:51:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:51:53.699 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:51:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:51:53.699 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:51:55 compute-0 podman[244252]: 2026-01-06 15:51:55.879840129 +0000 UTC m=+0.137815161 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 06 15:51:55 compute-0 podman[244253]: 2026-01-06 15:51:55.8817081 +0000 UTC m=+0.128411525 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:51:59 compute-0 podman[201918]: time="2026-01-06T15:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:51:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:51:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3900 "" "Go-http-client/1.1"
Jan 06 15:52:01 compute-0 openstack_network_exporter[205258]: ERROR   15:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:52:01 compute-0 openstack_network_exporter[205258]: ERROR   15:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:52:08 compute-0 podman[244302]: 2026-01-06 15:52:08.826233368 +0000 UTC m=+0.091223890 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 06 15:52:10 compute-0 podman[244320]: 2026-01-06 15:52:10.855606786 +0000 UTC m=+0.119853791 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true)
Jan 06 15:52:12 compute-0 podman[244339]: 2026-01-06 15:52:12.86065631 +0000 UTC m=+0.116553171 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, vcs-type=git, architecture=x86_64)
Jan 06 15:52:18 compute-0 podman[244358]: 2026-01-06 15:52:18.827309651 +0000 UTC m=+0.091597732 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 06 15:52:18 compute-0 podman[244359]: 2026-01-06 15:52:18.896513389 +0000 UTC m=+0.144756392 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:52:21 compute-0 podman[244398]: 2026-01-06 15:52:21.898785231 +0000 UTC m=+0.149353346 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, config_id=kepler, name=ubi9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.tags=base rhel9)
Jan 06 15:52:26 compute-0 podman[244418]: 2026-01-06 15:52:26.836539079 +0000 UTC m=+0.084517868 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:52:26 compute-0 podman[244417]: 2026-01-06 15:52:26.932680192 +0000 UTC m=+0.178416929 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 06 15:52:29 compute-0 podman[201918]: time="2026-01-06T15:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:52:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:52:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3892 "" "Go-http-client/1.1"
Jan 06 15:52:31 compute-0 openstack_network_exporter[205258]: ERROR   15:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:52:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:52:31 compute-0 openstack_network_exporter[205258]: ERROR   15:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:52:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:52:39 compute-0 podman[244465]: 2026-01-06 15:52:39.893982439 +0000 UTC m=+0.150161468 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:52:41 compute-0 podman[244483]: 2026-01-06 15:52:41.890722236 +0000 UTC m=+0.149004157 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, managed_by=edpm_ansible, org.label-schema.build-date=20251224, tcib_managed=true, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 06 15:52:42 compute-0 nova_compute[185513]: 2026-01-06 15:52:42.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:52:42 compute-0 nova_compute[185513]: 2026-01-06 15:52:42.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:52:43 compute-0 podman[244502]: 2026-01-06 15:52:43.87300451 +0000 UTC m=+0.123950553 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, version=9.6, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Jan 06 15:52:46 compute-0 nova_compute[185513]: 2026-01-06 15:52:46.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:52:47 compute-0 nova_compute[185513]: 2026-01-06 15:52:47.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.101 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.102 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.102 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.103 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.654 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.656 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5701MB free_disk=72.47979354858398GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.657 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.658 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.759 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.759 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.797 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.817 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.819 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:52:49 compute-0 nova_compute[185513]: 2026-01-06 15:52:49.820 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:52:49 compute-0 podman[244524]: 2026-01-06 15:52:49.850791194 +0000 UTC m=+0.103052082 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:52:49 compute-0 podman[244523]: 2026-01-06 15:52:49.87371894 +0000 UTC m=+0.134804749 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 06 15:52:52 compute-0 podman[244565]: 2026-01-06 15:52:52.87886337 +0000 UTC m=+0.139290801 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=kepler, maintainer=Red Hat, Inc., name=ubi9, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:52:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:52:53.700 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:52:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:52:53.701 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:52:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:52:53.702 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:52:53 compute-0 nova_compute[185513]: 2026-01-06 15:52:53.821 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:52:53 compute-0 nova_compute[185513]: 2026-01-06 15:52:53.824 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:52:53 compute-0 nova_compute[185513]: 2026-01-06 15:52:53.825 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:52:53 compute-0 nova_compute[185513]: 2026-01-06 15:52:53.848 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:52:55 compute-0 nova_compute[185513]: 2026-01-06 15:52:55.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:52:57 compute-0 podman[244585]: 2026-01-06 15:52:57.858700286 +0000 UTC m=+0.111092372 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 15:52:57 compute-0 podman[244584]: 2026-01-06 15:52:57.908402502 +0000 UTC m=+0.167969044 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:52:59 compute-0 podman[201918]: time="2026-01-06T15:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:52:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:52:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3898 "" "Go-http-client/1.1"
Jan 06 15:53:00 compute-0 nova_compute[185513]: 2026-01-06 15:53:00.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:01 compute-0 openstack_network_exporter[205258]: ERROR   15:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:53:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:53:01 compute-0 openstack_network_exporter[205258]: ERROR   15:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:53:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:53:10 compute-0 podman[244629]: 2026-01-06 15:53:10.874815907 +0000 UTC m=+0.130007468 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 06 15:53:12 compute-0 podman[244648]: 2026-01-06 15:53:12.845659437 +0000 UTC m=+0.103344390 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251224, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 06 15:53:14 compute-0 podman[244666]: 2026-01-06 15:53:14.806269379 +0000 UTC m=+0.105387176 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:53:20 compute-0 podman[244687]: 2026-01-06 15:53:20.851379699 +0000 UTC m=+0.105342346 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 15:53:20 compute-0 podman[244686]: 2026-01-06 15:53:20.851559004 +0000 UTC m=+0.109838089 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 06 15:53:23 compute-0 podman[244728]: 2026-01-06 15:53:23.902079454 +0000 UTC m=+0.156682156 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, release=1214.1726694543, managed_by=edpm_ansible, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, name=ubi9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container)
Jan 06 15:53:28 compute-0 podman[244749]: 2026-01-06 15:53:28.867290122 +0000 UTC m=+0.115791742 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 15:53:28 compute-0 podman[244748]: 2026-01-06 15:53:28.899008517 +0000 UTC m=+0.165185720 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
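The health_status events above come from podman's periodic healthcheck timers: each container's configured 'healthcheck' test (for example '/openstack/healthcheck ipmi') is executed inside the container and the result is written to the journal. A minimal sketch in Python, assuming podman is on PATH and the container names from these events are still present on the host, that triggers the same checks on demand:

import subprocess

# Container names taken from the health_status events above (assumption:
# they still exist on this host).
containers = ["ceilometer_agent_ipmi", "kepler", "node_exporter", "ovn_controller"]

for name in containers:
    # "podman healthcheck run" executes the container's configured healthcheck
    # command and exits 0 when the check reports healthy.
    result = subprocess.run(["podman", "healthcheck", "run", name],
                            capture_output=True, text=True)
    print(f"{name}: {'healthy' if result.returncode == 0 else 'unhealthy'}")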
Jan 06 15:53:29 compute-0 podman[201918]: time="2026-01-06T15:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:53:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:53:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3901 "" "Go-http-client/1.1"
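The two GET requests above are libpod REST API calls answered by the podman system service over a local unix socket. A minimal sketch, assuming the rootful service socket at /run/podman/podman.sock (rootless podman uses $XDG_RUNTIME_DIR/podman/podman.sock), that issues the same containers/json query:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that speaks HTTP over a unix domain socket."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

# Assumed socket path for the rootful podman API service.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for c in json.loads(resp.read()):
    print(c["Names"], c["State"])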
Jan 06 15:53:31 compute-0 openstack_network_exporter[205258]: ERROR   15:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:53:31 compute-0 openstack_network_exporter[205258]: ERROR   15:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
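The appctl.go errors above correspond to the ovs-vswitchd control commands dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, which are only meaningful for a userspace (netdev/DPDK) datapath; on a host running only the kernel datapath, ovs-vswitchd answers "please specify an existing datapath". A minimal sketch, assuming ovs-appctl is installed and ovs-vswitchd is running, that reproduces the same calls:

import subprocess

# These commands query PMD statistics of the userspace (netdev) datapath;
# they fail with "please specify an existing datapath" when no such datapath exists.
for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
    result = subprocess.run(["ovs-appctl", cmd], capture_output=True, text=True)
    output = result.stdout.strip() or result.stderr.strip()
    print(f"{cmd}: {output}")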
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.077 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.077 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:53:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:53:33.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
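The polling cycle logged above follows the pattern the manager describes: pollsters from the [pollsters] source are registered against a ThreadPoolExecutor (sized 1 here, hence the earlier warning that there are more pollsters than worker threads), each pollster runs its local_instances discovery, and pollsters whose discovery returns no resources are skipped before the cycle finishes. A minimal sketch of that dispatch pattern using only the standard library and hypothetical pollster callables, not ceilometer's actual classes:

from concurrent.futures import ThreadPoolExecutor

def discover_local_instances():
    # Hypothetical discovery step; returns an empty list, mirroring this host,
    # where no instances are running and every pollster is skipped.
    return []

def make_pollster(name):
    def poll():
        resources = discover_local_instances()
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        print(f"Polling {name} for {len(resources)} resources")
    return poll

pollsters = [make_pollster(n) for n in ("cpu", "memory.usage", "disk.device.usage")]

# One worker thread, mirroring "Processing pollsters for [pollsters] with [1] threads".
with ThreadPoolExecutor(max_workers=1) as executor:
    for future in [executor.submit(p) for p in pollsters]:
        future.result()
    print("Finished processing pollsters")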
Jan 06 15:53:41 compute-0 podman[244799]: 2026-01-06 15:53:41.877274807 +0000 UTC m=+0.128316712 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 06 15:53:43 compute-0 nova_compute[185513]: 2026-01-06 15:53:43.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:43 compute-0 podman[244818]: 2026-01-06 15:53:43.882823565 +0000 UTC m=+0.143295991 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 15:53:44 compute-0 nova_compute[185513]: 2026-01-06 15:53:44.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:45 compute-0 nova_compute[185513]: 2026-01-06 15:53:45.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:45 compute-0 nova_compute[185513]: 2026-01-06 15:53:45.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 06 15:53:45 compute-0 podman[244839]: 2026-01-06 15:53:45.846526491 +0000 UTC m=+0.105521880 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64)
Jan 06 15:53:46 compute-0 nova_compute[185513]: 2026-01-06 15:53:46.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:46 compute-0 nova_compute[185513]: 2026-01-06 15:53:46.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:47 compute-0 nova_compute[185513]: 2026-01-06 15:53:47.219 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:50 compute-0 nova_compute[185513]: 2026-01-06 15:53:50.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:50 compute-0 nova_compute[185513]: 2026-01-06 15:53:50.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.061 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.062 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.063 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
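[editor's note] The Acquiring/acquired/released trio around clean_compute_node_cache is the standard log pattern of oslo.concurrency's synchronized decorator, which records how long a caller waited for and then held a named lock. A minimal sketch, assuming oslo.concurrency is installed (the class and method names here are illustrative, not nova's own), of producing the same pattern:

    # Minimal sketch (assumes oslo.concurrency is installed; names are illustrative placeholders).
    import logging

    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    class ResourceTrackerLike:
        @lockutils.synchronized("compute_resources")
        def clean_compute_node_cache(self):
            # While this body runs, other callers using the same lock name block;
            # lockutils logs "Acquiring", "acquired :: waited" and "released :: held" at DEBUG.
            pass

    ResourceTrackerLike().clean_compute_node_cache()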
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.064 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.636 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.637 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5709MB free_disk=72.47883605957031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.638 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.638 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.772 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.773 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:53:51 compute-0 podman[244860]: 2026-01-06 15:53:51.84651054 +0000 UTC m=+0.101398437 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:53:51 compute-0 podman[244859]: 2026-01-06 15:53:51.849225925 +0000 UTC m=+0.121694762 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, tcib_managed=true)
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.855 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing inventories for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.942 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating ProviderTree inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.942 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
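[editor's note] The inventory pushed to placement above is what bounds scheduling on this node: for each resource class the usable capacity is (total - reserved) * allocation_ratio, consumed in step_size increments between min_unit and max_unit. A short worked example using the exact values from this log line:

    # Effective capacity as placement derives it from the inventory record logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} allocatable")

    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 71.1, matching the 4x CPU overcommit,
    # no memory overcommit, and 0.9 disk allocation ratio configured on this node.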
Jan 06 15:53:51 compute-0 nova_compute[185513]: 2026-01-06 15:53:51.959 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing aggregate associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 06 15:53:52 compute-0 nova_compute[185513]: 2026-01-06 15:53:52.001 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing trait associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 06 15:53:52 compute-0 nova_compute[185513]: 2026-01-06 15:53:52.039 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:53:52 compute-0 nova_compute[185513]: 2026-01-06 15:53:52.066 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:53:52 compute-0 nova_compute[185513]: 2026-01-06 15:53:52.068 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:53:52 compute-0 nova_compute[185513]: 2026-01-06 15:53:52.069 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:53:52 compute-0 nova_compute[185513]: 2026-01-06 15:53:52.069 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:52 compute-0 nova_compute[185513]: 2026-01-06 15:53:52.070 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 06 15:53:52 compute-0 nova_compute[185513]: 2026-01-06 15:53:52.084 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 06 15:53:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:53:53.701 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:53:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:53:53.702 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:53:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:53:53.703 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:53:54 compute-0 nova_compute[185513]: 2026-01-06 15:53:54.084 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:54 compute-0 nova_compute[185513]: 2026-01-06 15:53:54.084 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:53:54 compute-0 nova_compute[185513]: 2026-01-06 15:53:54.085 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:53:54 compute-0 nova_compute[185513]: 2026-01-06 15:53:54.104 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:53:54 compute-0 podman[244903]: 2026-01-06 15:53:54.866353241 +0000 UTC m=+0.120559560 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release-0.7.12=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, config_id=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.4)
Jan 06 15:53:57 compute-0 nova_compute[185513]: 2026-01-06 15:53:57.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:53:59 compute-0 podman[201918]: time="2026-01-06T15:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:53:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:53:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3896 "" "Go-http-client/1.1"
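[editor's note] The two GET requests above are libpod REST calls answered by the podman system service over /run/podman/podman.sock, the same socket the podman_exporter container mounts. A minimal sketch, assuming read access to that socket, of issuing the containers/json query from Python with the path taken verbatim from the log:

    # Minimal sketch (assumes /run/podman/podman.sock is readable; API path copied from the log).
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix-domain socket instead of TCP."""

        def __init__(self, path: str):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")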
Jan 06 15:53:59 compute-0 podman[244923]: 2026-01-06 15:53:59.871806656 +0000 UTC m=+0.123926372 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:53:59 compute-0 podman[244922]: 2026-01-06 15:53:59.921779269 +0000 UTC m=+0.189578183 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:54:01 compute-0 openstack_network_exporter[205258]: ERROR   15:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:54:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:54:01 compute-0 openstack_network_exporter[205258]: ERROR   15:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:54:01 compute-0 openstack_network_exporter[205258]: 
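[editor's note] The pmd-perf-show and pmd-rxq-show commands queried by openstack_network_exporter apply only to the userspace (dpif-netdev) datapath; when no such datapath exists, which appears to be the case on this node, ovs-vswitchd replies "please specify an existing datapath" and the exporter logs it as an error and carries on. A minimal sketch, assuming ovs-appctl is installed and ovs-vswitchd is running locally, of reproducing the same reply:

    # Minimal sketch (assumes ovs-appctl on PATH and a running ovs-vswitchd control socket).
    import subprocess

    for cmd in ("dpif-netdev/pmd-perf-show", "dpif-netdev/pmd-rxq-show"):
        # These commands report PMD thread statistics for the userspace datapath only;
        # with just the kernel datapath present they fail exactly as in the log above.
        result = subprocess.run(["ovs-appctl", cmd], capture_output=True, text=True)
        print(cmd, "->", (result.stdout or result.stderr).strip())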
Jan 06 15:54:12 compute-0 sshd-session[244971]: Connection closed by 87.236.176.183 port 56565
Jan 06 15:54:12 compute-0 sshd-session[244972]: Connection closed by 87.236.176.183 port 45285 [preauth]
Jan 06 15:54:12 compute-0 podman[244974]: 2026-01-06 15:54:12.849419607 +0000 UTC m=+0.107952776 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Jan 06 15:54:14 compute-0 podman[244993]: 2026-01-06 15:54:14.794206187 +0000 UTC m=+0.111248516 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:54:16 compute-0 podman[245013]: 2026-01-06 15:54:16.852089983 +0000 UTC m=+0.110255359 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6)
Jan 06 15:54:22 compute-0 podman[245035]: 2026-01-06 15:54:22.84721928 +0000 UTC m=+0.095750903 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 06 15:54:22 compute-0 podman[245036]: 2026-01-06 15:54:22.851354383 +0000 UTC m=+0.093142052 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 15:54:25 compute-0 podman[245076]: 2026-01-06 15:54:25.868669765 +0000 UTC m=+0.128086486 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_id=kepler, distribution-scope=public, name=ubi9, io.openshift.expose-services=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:54:29 compute-0 podman[201918]: time="2026-01-06T15:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:54:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:54:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3897 "" "Go-http-client/1.1"
Jan 06 15:54:30 compute-0 podman[245097]: 2026-01-06 15:54:30.856074738 +0000 UTC m=+0.104727239 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:54:30 compute-0 podman[245096]: 2026-01-06 15:54:30.949747763 +0000 UTC m=+0.205102537 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 06 15:54:31 compute-0 openstack_network_exporter[205258]: ERROR   15:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:54:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:54:31 compute-0 openstack_network_exporter[205258]: ERROR   15:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:54:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:54:43 compute-0 podman[245145]: 2026-01-06 15:54:43.866832875 +0000 UTC m=+0.126456232 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 06 15:54:44 compute-0 nova_compute[185513]: 2026-01-06 15:54:44.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:54:45 compute-0 nova_compute[185513]: 2026-01-06 15:54:45.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:54:45 compute-0 podman[245164]: 2026-01-06 15:54:45.850398009 +0000 UTC m=+0.113775079 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224)
Jan 06 15:54:47 compute-0 nova_compute[185513]: 2026-01-06 15:54:47.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:54:47 compute-0 podman[245183]: 2026-01-06 15:54:47.879449817 +0000 UTC m=+0.135047770 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Jan 06 15:54:49 compute-0 nova_compute[185513]: 2026-01-06 15:54:49.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.068 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.069 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.069 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.070 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.651 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.652 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5701MB free_disk=72.47962951660156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.652 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.653 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.743 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.743 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.771 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.792 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.794 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:54:51 compute-0 nova_compute[185513]: 2026-01-06 15:54:51.795 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:54:52 compute-0 nova_compute[185513]: 2026-01-06 15:54:52.795 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:54:52 compute-0 nova_compute[185513]: 2026-01-06 15:54:52.796 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:54:52 compute-0 nova_compute[185513]: 2026-01-06 15:54:52.796 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:54:53 compute-0 nova_compute[185513]: 2026-01-06 15:54:53.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:54:53 compute-0 nova_compute[185513]: 2026-01-06 15:54:53.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:54:53 compute-0 nova_compute[185513]: 2026-01-06 15:54:53.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:54:53 compute-0 nova_compute[185513]: 2026-01-06 15:54:53.041 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:54:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:54:53.703 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:54:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:54:53.704 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:54:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:54:53.705 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:54:53 compute-0 podman[245204]: 2026-01-06 15:54:53.861482857 +0000 UTC m=+0.120681721 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 06 15:54:53 compute-0 podman[245205]: 2026-01-06 15:54:53.88387107 +0000 UTC m=+0.141372227 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:54:56 compute-0 podman[245245]: 2026-01-06 15:54:56.871056703 +0000 UTC m=+0.124164668 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.openshift.tags=base rhel9, distribution-scope=public, version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc.)
Jan 06 15:54:59 compute-0 nova_compute[185513]: 2026-01-06 15:54:59.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:54:59 compute-0 podman[201918]: time="2026-01-06T15:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:54:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:54:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3901 "" "Go-http-client/1.1"
Jan 06 15:55:01 compute-0 nova_compute[185513]: 2026-01-06 15:55:01.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:55:01 compute-0 openstack_network_exporter[205258]: ERROR   15:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:55:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:55:01 compute-0 openstack_network_exporter[205258]: ERROR   15:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:55:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:55:01 compute-0 podman[245265]: 2026-01-06 15:55:01.85333661 +0000 UTC m=+0.107216256 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:55:01 compute-0 podman[245264]: 2026-01-06 15:55:01.919468261 +0000 UTC m=+0.180070254 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 06 15:55:14 compute-0 podman[245309]: 2026-01-06 15:55:14.776106098 +0000 UTC m=+0.090178242 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 15:55:16 compute-0 podman[245328]: 2026-01-06 15:55:16.856045264 +0000 UTC m=+0.108523452 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 06 15:55:18 compute-0 podman[245347]: 2026-01-06 15:55:18.830007728 +0000 UTC m=+0.103111332 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:55:24 compute-0 podman[245369]: 2026-01-06 15:55:24.875330749 +0000 UTC m=+0.130960277 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 15:55:24 compute-0 podman[245370]: 2026-01-06 15:55:24.897314401 +0000 UTC m=+0.143014223 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:55:27 compute-0 podman[245411]: 2026-01-06 15:55:27.882002624 +0000 UTC m=+0.144810522 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, config_id=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 06 15:55:29 compute-0 podman[201918]: time="2026-01-06T15:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:55:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:55:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3893 "" "Go-http-client/1.1"
Jan 06 15:55:31 compute-0 openstack_network_exporter[205258]: ERROR   15:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:55:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:55:31 compute-0 openstack_network_exporter[205258]: ERROR   15:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:55:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:55:32 compute-0 podman[245430]: 2026-01-06 15:55:32.857954433 +0000 UTC m=+0.109992184 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 15:55:32 compute-0 podman[245429]: 2026-01-06 15:55:32.919752153 +0000 UTC m=+0.178073379 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.078 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.079 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:55:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:55:45 compute-0 nova_compute[185513]: 2026-01-06 15:55:45.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:55:45 compute-0 podman[245476]: 2026-01-06 15:55:45.820758107 +0000 UTC m=+0.082824567 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 06 15:55:46 compute-0 nova_compute[185513]: 2026-01-06 15:55:46.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:55:47 compute-0 nova_compute[185513]: 2026-01-06 15:55:47.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:55:47 compute-0 podman[245494]: 2026-01-06 15:55:47.835900698 +0000 UTC m=+0.097896696 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 06 15:55:49 compute-0 nova_compute[185513]: 2026-01-06 15:55:49.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:55:49 compute-0 podman[245513]: 2026-01-06 15:55:49.896560208 +0000 UTC m=+0.155489199 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:55:52 compute-0 nova_compute[185513]: 2026-01-06 15:55:52.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.056 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.056 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.056 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.057 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.533 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.535 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5678MB free_disk=72.47962951660156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.535 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.536 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.638 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.639 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.666 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.683 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.686 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:55:53 compute-0 nova_compute[185513]: 2026-01-06 15:55:53.687 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:55:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:55:53.705 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:55:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:55:53.706 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:55:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:55:53.706 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:55:54 compute-0 nova_compute[185513]: 2026-01-06 15:55:54.689 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:55:54 compute-0 nova_compute[185513]: 2026-01-06 15:55:54.690 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:55:54 compute-0 nova_compute[185513]: 2026-01-06 15:55:54.691 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:55:54 compute-0 nova_compute[185513]: 2026-01-06 15:55:54.708 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:55:55 compute-0 podman[245533]: 2026-01-06 15:55:55.860543485 +0000 UTC m=+0.117739309 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:55:55 compute-0 podman[245532]: 2026-01-06 15:55:55.885820889 +0000 UTC m=+0.137536980 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS)
Jan 06 15:55:58 compute-0 podman[245574]: 2026-01-06 15:55:58.858079676 +0000 UTC m=+0.114724425 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, container_name=kepler, io.openshift.expose-services=, release-0.7.12=, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, config_id=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:55:59 compute-0 nova_compute[185513]: 2026-01-06 15:55:59.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:55:59 compute-0 podman[201918]: time="2026-01-06T15:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:55:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:55:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3896 "" "Go-http-client/1.1"
Jan 06 15:56:01 compute-0 openstack_network_exporter[205258]: ERROR   15:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:56:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:56:01 compute-0 openstack_network_exporter[205258]: ERROR   15:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:56:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:56:03 compute-0 podman[245594]: 2026-01-06 15:56:03.851889063 +0000 UTC m=+0.100183760 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:56:03 compute-0 podman[245593]: 2026-01-06 15:56:03.926678396 +0000 UTC m=+0.192870571 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 06 15:56:16 compute-0 podman[245639]: 2026-01-06 15:56:16.889659425 +0000 UTC m=+0.149131483 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 06 15:56:18 compute-0 podman[245658]: 2026-01-06 15:56:18.859710461 +0000 UTC m=+0.118961453 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 06 15:56:20 compute-0 podman[245677]: 2026-01-06 15:56:20.866657625 +0000 UTC m=+0.121822963 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, distribution-scope=public, version=9.6, config_id=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 06 15:56:26 compute-0 podman[245700]: 2026-01-06 15:56:26.867478797 +0000 UTC m=+0.118043038 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:56:26 compute-0 podman[245699]: 2026-01-06 15:56:26.882664139 +0000 UTC m=+0.138306921 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 06 15:56:29 compute-0 podman[201918]: time="2026-01-06T15:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:56:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:56:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3891 "" "Go-http-client/1.1"
Jan 06 15:56:29 compute-0 podman[245738]: 2026-01-06 15:56:29.859086162 +0000 UTC m=+0.116531695 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, maintainer=Red Hat, Inc., release=1214.1726694543, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public)
Jan 06 15:56:31 compute-0 openstack_network_exporter[205258]: ERROR   15:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:56:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:56:31 compute-0 openstack_network_exporter[205258]: ERROR   15:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:56:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:56:34 compute-0 podman[245759]: 2026-01-06 15:56:34.84786524 +0000 UTC m=+0.104225092 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:56:34 compute-0 podman[245758]: 2026-01-06 15:56:34.92435915 +0000 UTC m=+0.186672208 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 06 15:56:45 compute-0 nova_compute[185513]: 2026-01-06 15:56:45.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:56:46 compute-0 nova_compute[185513]: 2026-01-06 15:56:46.017 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:56:47 compute-0 podman[245805]: 2026-01-06 15:56:47.866684914 +0000 UTC m=+0.125688460 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 06 15:56:49 compute-0 nova_compute[185513]: 2026-01-06 15:56:49.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:56:49 compute-0 podman[245825]: 2026-01-06 15:56:49.8833868 +0000 UTC m=+0.144794243 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 06 15:56:51 compute-0 nova_compute[185513]: 2026-01-06 15:56:51.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:56:51 compute-0 podman[245843]: 2026-01-06 15:56:51.849069463 +0000 UTC m=+0.104377606 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 06 15:56:53 compute-0 nova_compute[185513]: 2026-01-06 15:56:53.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:56:53 compute-0 nova_compute[185513]: 2026-01-06 15:56:53.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:56:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:56:53.707 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:56:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:56:53.708 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:56:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:56:53.708 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.065 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.066 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.066 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.067 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.547 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.548 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5689MB free_disk=72.47954940795898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.549 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.549 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.642 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.644 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.681 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.702 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.704 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:56:54 compute-0 nova_compute[185513]: 2026-01-06 15:56:54.704 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:56:55 compute-0 nova_compute[185513]: 2026-01-06 15:56:55.704 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:56:55 compute-0 nova_compute[185513]: 2026-01-06 15:56:55.705 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:56:55 compute-0 nova_compute[185513]: 2026-01-06 15:56:55.706 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:56:55 compute-0 nova_compute[185513]: 2026-01-06 15:56:55.727 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:56:57 compute-0 podman[245865]: 2026-01-06 15:56:57.79865586 +0000 UTC m=+0.067470889 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:56:57 compute-0 podman[245864]: 2026-01-06 15:56:57.802080575 +0000 UTC m=+0.074981978 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 06 15:56:59 compute-0 podman[201918]: time="2026-01-06T15:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:56:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:56:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3891 "" "Go-http-client/1.1"
Jan 06 15:57:00 compute-0 podman[245905]: 2026-01-06 15:57:00.839014053 +0000 UTC m=+0.110522217 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30)
Jan 06 15:57:01 compute-0 nova_compute[185513]: 2026-01-06 15:57:01.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:57:01 compute-0 openstack_network_exporter[205258]: ERROR   15:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:57:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:57:01 compute-0 openstack_network_exporter[205258]: ERROR   15:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:57:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:57:05 compute-0 nova_compute[185513]: 2026-01-06 15:57:05.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:57:05 compute-0 podman[245926]: 2026-01-06 15:57:05.85002632 +0000 UTC m=+0.109298014 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 15:57:05 compute-0 podman[245925]: 2026-01-06 15:57:05.900741662 +0000 UTC m=+0.163877694 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Jan 06 15:57:18 compute-0 podman[245977]: 2026-01-06 15:57:18.825518787 +0000 UTC m=+0.087742154 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 06 15:57:20 compute-0 podman[245996]: 2026-01-06 15:57:20.862003092 +0000 UTC m=+0.123127929 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224)
Jan 06 15:57:22 compute-0 podman[246017]: 2026-01-06 15:57:22.892431119 +0000 UTC m=+0.146691805 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal)
Jan 06 15:57:28 compute-0 podman[246038]: 2026-01-06 15:57:28.840939866 +0000 UTC m=+0.105399646 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 15:57:28 compute-0 podman[246037]: 2026-01-06 15:57:28.850317227 +0000 UTC m=+0.120440824 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:57:29 compute-0 podman[201918]: time="2026-01-06T15:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:57:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:57:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3890 "" "Go-http-client/1.1"
Jan 06 15:57:31 compute-0 openstack_network_exporter[205258]: ERROR   15:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:57:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:57:31 compute-0 openstack_network_exporter[205258]: ERROR   15:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:57:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:57:31 compute-0 podman[246076]: 2026-01-06 15:57:31.887001017 +0000 UTC m=+0.143711292 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=kepler, io.buildah.version=1.29.0, release=1214.1726694543, name=ubi9, release-0.7.12=, vcs-type=git, version=9.4, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.079 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.079 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.089 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:57:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:57:36 compute-0 podman[246096]: 2026-01-06 15:57:36.879580078 +0000 UTC m=+0.127190441 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:57:36 compute-0 podman[246095]: 2026-01-06 15:57:36.931449992 +0000 UTC m=+0.185779672 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 06 15:57:46 compute-0 nova_compute[185513]: 2026-01-06 15:57:46.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:57:48 compute-0 nova_compute[185513]: 2026-01-06 15:57:48.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:57:49 compute-0 podman[246143]: 2026-01-06 15:57:49.863039884 +0000 UTC m=+0.120097045 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 06 15:57:50 compute-0 nova_compute[185513]: 2026-01-06 15:57:50.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:57:51 compute-0 nova_compute[185513]: 2026-01-06 15:57:51.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:57:51 compute-0 podman[246163]: 2026-01-06 15:57:51.884647176 +0000 UTC m=+0.140965445 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:57:53 compute-0 nova_compute[185513]: 2026-01-06 15:57:53.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:57:53 compute-0 nova_compute[185513]: 2026-01-06 15:57:53.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:57:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:57:53.709 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:57:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:57:53.710 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:57:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:57:53.710 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:57:53 compute-0 podman[246184]: 2026-01-06 15:57:53.900742884 +0000 UTC m=+0.158905145 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9)
Jan 06 15:57:54 compute-0 nova_compute[185513]: 2026-01-06 15:57:54.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.232 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.233 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.271 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.272 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.272 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.273 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.712 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.713 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5699MB free_disk=72.47954940795898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.713 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.714 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.800 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.801 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.837 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.860 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.863 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:57:55 compute-0 nova_compute[185513]: 2026-01-06 15:57:55.864 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:57:59 compute-0 podman[201918]: time="2026-01-06T15:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:57:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:57:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3898 "" "Go-http-client/1.1"
Jan 06 15:57:59 compute-0 podman[246205]: 2026-01-06 15:57:59.854022173 +0000 UTC m=+0.117122282 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 06 15:57:59 compute-0 podman[246206]: 2026-01-06 15:57:59.862468518 +0000 UTC m=+0.115408144 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 15:58:01 compute-0 openstack_network_exporter[205258]: ERROR   15:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:58:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:58:01 compute-0 openstack_network_exporter[205258]: ERROR   15:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:58:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:58:02 compute-0 nova_compute[185513]: 2026-01-06 15:58:02.654 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:58:02 compute-0 podman[246247]: 2026-01-06 15:58:02.858304861 +0000 UTC m=+0.122070339 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, distribution-scope=public, container_name=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:58:07 compute-0 podman[246267]: 2026-01-06 15:58:07.836570707 +0000 UTC m=+0.094668246 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 15:58:07 compute-0 podman[246266]: 2026-01-06 15:58:07.89271143 +0000 UTC m=+0.162906296 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 06 15:58:20 compute-0 podman[246317]: 2026-01-06 15:58:20.858467714 +0000 UTC m=+0.123535490 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 06 15:58:22 compute-0 podman[246336]: 2026-01-06 15:58:22.856631264 +0000 UTC m=+0.115710603 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:58:24 compute-0 podman[246355]: 2026-01-06 15:58:24.823823279 +0000 UTC m=+0.088818544 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:58:29 compute-0 podman[201918]: time="2026-01-06T15:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:58:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:58:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3896 "" "Go-http-client/1.1"
Jan 06 15:58:30 compute-0 podman[246376]: 2026-01-06 15:58:30.864686556 +0000 UTC m=+0.113658285 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:58:30 compute-0 podman[246375]: 2026-01-06 15:58:30.878400678 +0000 UTC m=+0.133805856 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:58:31 compute-0 openstack_network_exporter[205258]: ERROR   15:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:58:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:58:31 compute-0 openstack_network_exporter[205258]: ERROR   15:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:58:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 15:58:33 compute-0 podman[246415]: 2026-01-06 15:58:33.899460674 +0000 UTC m=+0.156369915 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=kepler, container_name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 06 15:58:38 compute-0 podman[246436]: 2026-01-06 15:58:38.845077609 +0000 UTC m=+0.095827939 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 15:58:38 compute-0 podman[246435]: 2026-01-06 15:58:38.973931586 +0000 UTC m=+0.224610914 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 06 15:58:46 compute-0 nova_compute[185513]: 2026-01-06 15:58:46.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:58:46 compute-0 nova_compute[185513]: 2026-01-06 15:58:46.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:58:46 compute-0 nova_compute[185513]: 2026-01-06 15:58:46.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 06 15:58:49 compute-0 nova_compute[185513]: 2026-01-06 15:58:49.036 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:58:51 compute-0 nova_compute[185513]: 2026-01-06 15:58:51.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:58:51 compute-0 podman[246483]: 2026-01-06 15:58:51.861552207 +0000 UTC m=+0.122109200 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 15:58:52 compute-0 nova_compute[185513]: 2026-01-06 15:58:52.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:58:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:58:53.710 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:58:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:58:53.712 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:58:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:58:53.712 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:58:53 compute-0 podman[246502]: 2026-01-06 15:58:53.842845305 +0000 UTC m=+0.108645365 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 06 15:58:54 compute-0 nova_compute[185513]: 2026-01-06 15:58:54.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:58:54 compute-0 nova_compute[185513]: 2026-01-06 15:58:54.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:58:55 compute-0 nova_compute[185513]: 2026-01-06 15:58:55.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:58:55 compute-0 podman[246520]: 2026-01-06 15:58:55.834854993 +0000 UTC m=+0.092422374 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal)
Jan 06 15:58:56 compute-0 nova_compute[185513]: 2026-01-06 15:58:56.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:58:58 compute-0 nova_compute[185513]: 2026-01-06 15:58:58.295 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:58:58 compute-0 nova_compute[185513]: 2026-01-06 15:58:58.296 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:58:58 compute-0 nova_compute[185513]: 2026-01-06 15:58:58.296 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:58:58 compute-0 nova_compute[185513]: 2026-01-06 15:58:58.297 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:58:58 compute-0 nova_compute[185513]: 2026-01-06 15:58:58.678 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:58:58 compute-0 nova_compute[185513]: 2026-01-06 15:58:58.679 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5709MB free_disk=72.47932434082031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:58:58 compute-0 nova_compute[185513]: 2026-01-06 15:58:58.679 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:58:58 compute-0 nova_compute[185513]: 2026-01-06 15:58:58.680 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.081 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.082 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.148 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing inventories for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.221 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating ProviderTree inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.222 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
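The inventory the resource tracker pushes to placement combines raw totals with reserved amounts and allocation ratios; the schedulable capacity for each resource class is (total - reserved) * allocation_ratio, so the values logged above give 32 VCPU, 7167 MB of RAM and about 71 GB of disk. A small check using the inventory dict exactly as logged:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32 schedulable
    # MEMORY_MB: 7167 schedulable
    # DISK_GB: 71.1 schedulable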
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.235 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing aggregate associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.266 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing trait associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.291 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.308 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.312 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.312 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.313 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.313 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.429 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 06 15:58:59 compute-0 nova_compute[185513]: 2026-01-06 15:58:59.431 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:58:59 compute-0 podman[201918]: time="2026-01-06T15:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:58:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:58:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3901 "" "Go-http-client/1.1"
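The two GET requests above are served by the podman system service listening on a unix socket; the exporters point CONTAINER_HOST at unix:///run/podman/podman.sock, and the libpod REST API speaks plain HTTP over that socket. A minimal client sketch using only the standard library, assuming the same socket path and API version seen in the access-log lines (reading the root-owned socket normally requires root):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix domain socket (helper for this sketch only)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))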
Jan 06 15:59:00 compute-0 nova_compute[185513]: 2026-01-06 15:59:00.444 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:00 compute-0 nova_compute[185513]: 2026-01-06 15:59:00.445 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:59:00 compute-0 nova_compute[185513]: 2026-01-06 15:59:00.445 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:59:00 compute-0 nova_compute[185513]: 2026-01-06 15:59:00.465 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:59:01 compute-0 openstack_network_exporter[205258]: ERROR   15:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:59:01 compute-0 openstack_network_exporter[205258]: ERROR   15:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
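The two appctl errors above come from openstack_network_exporter asking ovs-vswitchd for PMD statistics; dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show only apply to the userspace (netdev/DPDK) datapath, so on a host running only the kernel datapath they fail with "please specify an existing datapath". A defensive sketch that checks which datapaths exist before asking for PMD stats, assuming ovs-appctl is available on the host:

    import subprocess

    def pmd_perf_show():
        # "ovs-appctl dpif/show" lists the configured datapaths.
        dpifs = subprocess.run(["ovs-appctl", "dpif/show"],
                               capture_output=True, text=True, check=True).stdout
        if "netdev" not in dpifs:
            # Kernel (system) datapath only: there are no PMD threads to query.
            return None
        return subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                              capture_output=True, text=True, check=True).stdout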
Jan 06 15:59:01 compute-0 podman[246541]: 2026-01-06 15:59:01.809852736 +0000 UTC m=+0.080838524 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 15:59:01 compute-0 podman[246542]: 2026-01-06 15:59:01.832065822 +0000 UTC m=+0.090578480 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:59:04 compute-0 nova_compute[185513]: 2026-01-06 15:59:04.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:04 compute-0 podman[246585]: 2026-01-06 15:59:04.861964178 +0000 UTC m=+0.119568300 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=kepler, maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 15:59:06 compute-0 nova_compute[185513]: 2026-01-06 15:59:06.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:09 compute-0 podman[246606]: 2026-01-06 15:59:09.857469994 +0000 UTC m=+0.110458962 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
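node_exporter above is started with --collector.systemd and a unit-include filter, so only units matching (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service are exported. A quick check of which unit names that regular expression selects (the unit list here is illustrative, not taken from the host):

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    units = ["edpm_nova_compute.service", "ovs-vswitchd.service",
             "virtqemud.service", "rsyslog.service", "sshd.service"]
    for u in units:
        print(u, bool(unit_include.fullmatch(u)))
    # sshd.service is the only one filtered out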
Jan 06 15:59:09 compute-0 podman[246605]: 2026-01-06 15:59:09.929807415 +0000 UTC m=+0.184019056 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 06 15:59:20 compute-0 nova_compute[185513]: 2026-01-06 15:59:20.677 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:22 compute-0 podman[246655]: 2026-01-06 15:59:22.833714995 +0000 UTC m=+0.089002487 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 06 15:59:23 compute-0 nova_compute[185513]: 2026-01-06 15:59:23.657 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:24 compute-0 podman[246673]: 2026-01-06 15:59:24.839244234 +0000 UTC m=+0.103184903 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 06 15:59:26 compute-0 podman[246693]: 2026-01-06 15:59:26.888698998 +0000 UTC m=+0.149669100 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, version=9.6, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 06 15:59:29 compute-0 podman[201918]: time="2026-01-06T15:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:59:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:59:29 compute-0 podman[201918]: @ - - [06/Jan/2026:15:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3895 "" "Go-http-client/1.1"
Jan 06 15:59:31 compute-0 openstack_network_exporter[205258]: ERROR   15:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 15:59:31 compute-0 openstack_network_exporter[205258]: ERROR   15:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 15:59:32 compute-0 podman[246714]: 2026-01-06 15:59:32.845847121 +0000 UTC m=+0.109802043 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 15:59:32 compute-0 podman[246713]: 2026-01-06 15:59:32.859769191 +0000 UTC m=+0.124026981 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.080 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.081 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
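The two DEBUG lines above describe the polling model: ceilometer builds a thread-pool executor per polling source and submits each pollster to it, and when there are more pollsters than worker threads the cycle simply runs longer. A stripped-down sketch of that submit-and-wait pattern (the pollster names and the poll() body are made up for illustration):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # Stand-in for a pollster: discover resources, then sample them.
        return name, []          # no resources found this cycle

    pollsters = ["cpu", "disk.device.read.bytes", "network.incoming.packets.drop"]

    with ThreadPoolExecutor(max_workers=1) as executor:
        # With a single worker the pollsters run one after another,
        # which is why the log warns that the cycle may take longer.
        for name, samples in executor.map(poll, pollsters):
            print(f"{name}: {len(samples)} samples")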
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.089 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 15:59:33.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 15:59:35 compute-0 podman[246754]: 2026-01-06 15:59:35.869997241 +0000 UTC m=+0.137134308 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, container_name=kepler, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible)
Jan 06 15:59:40 compute-0 podman[246775]: 2026-01-06 15:59:40.820673633 +0000 UTC m=+0.075846038 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 15:59:40 compute-0 podman[246774]: 2026-01-06 15:59:40.902774831 +0000 UTC m=+0.168126523 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 06 15:59:46 compute-0 nova_compute[185513]: 2026-01-06 15:59:46.053 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:49 compute-0 nova_compute[185513]: 2026-01-06 15:59:49.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:51 compute-0 nova_compute[185513]: 2026-01-06 15:59:51.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:53 compute-0 nova_compute[185513]: 2026-01-06 15:59:53.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:59:53.712 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:59:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:59:53.714 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:59:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 15:59:53.715 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:59:53 compute-0 podman[246821]: 2026-01-06 15:59:53.882115506 +0000 UTC m=+0.146462793 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 06 15:59:55 compute-0 nova_compute[185513]: 2026-01-06 15:59:55.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:55 compute-0 nova_compute[185513]: 2026-01-06 15:59:55.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 15:59:55 compute-0 podman[246838]: 2026-01-06 15:59:55.867334121 +0000 UTC m=+0.124139194 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224)
Jan 06 15:59:57 compute-0 nova_compute[185513]: 2026-01-06 15:59:57.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:57 compute-0 podman[246857]: 2026-01-06 15:59:57.870118504 +0000 UTC m=+0.132185634 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, distribution-scope=public)
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.325 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.325 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.353 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.354 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.354 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.354 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.856 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.859 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5686MB free_disk=72.47930526733398GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.860 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 15:59:58 compute-0 nova_compute[185513]: 2026-01-06 15:59:58.861 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 15:59:59 compute-0 nova_compute[185513]: 2026-01-06 15:59:59.045 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 15:59:59 compute-0 nova_compute[185513]: 2026-01-06 15:59:59.046 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 15:59:59 compute-0 nova_compute[185513]: 2026-01-06 15:59:59.085 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 15:59:59 compute-0 nova_compute[185513]: 2026-01-06 15:59:59.192 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 15:59:59 compute-0 nova_compute[185513]: 2026-01-06 15:59:59.196 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 15:59:59 compute-0 nova_compute[185513]: 2026-01-06 15:59:59.197 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 15:59:59 compute-0 podman[201918]: time="2026-01-06T15:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 15:59:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 15:59:59 compute-0 podman[201918]: @ - - [06/Jan/2026:15:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3895 "" "Go-http-client/1.1"
Jan 06 16:00:01 compute-0 openstack_network_exporter[205258]: ERROR   16:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:00:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:00:01 compute-0 openstack_network_exporter[205258]: ERROR   16:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:00:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:00:03 compute-0 podman[246880]: 2026-01-06 16:00:03.84785331 +0000 UTC m=+0.091260328 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 16:00:03 compute-0 podman[246879]: 2026-01-06 16:00:03.849828134 +0000 UTC m=+0.116639840 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 16:00:05 compute-0 nova_compute[185513]: 2026-01-06 16:00:05.896 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:00:06 compute-0 podman[246922]: 2026-01-06 16:00:06.852298132 +0000 UTC m=+0.113107933 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Jan 06 16:00:11 compute-0 podman[246943]: 2026-01-06 16:00:11.865024008 +0000 UTC m=+0.107757878 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 16:00:11 compute-0 podman[246942]: 2026-01-06 16:00:11.900963707 +0000 UTC m=+0.157439541 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:00:24 compute-0 podman[246991]: 2026-01-06 16:00:24.891783075 +0000 UTC m=+0.145915328 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 06 16:00:26 compute-0 podman[247009]: 2026-01-06 16:00:26.865233918 +0000 UTC m=+0.121644186 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute)
Jan 06 16:00:28 compute-0 podman[247029]: 2026-01-06 16:00:28.837892151 +0000 UTC m=+0.097309433 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Jan 06 16:00:29 compute-0 podman[201918]: time="2026-01-06T16:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:00:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:00:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3895 "" "Go-http-client/1.1"
Jan 06 16:00:31 compute-0 openstack_network_exporter[205258]: ERROR   16:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:00:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:00:31 compute-0 openstack_network_exporter[205258]: ERROR   16:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:00:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:00:34 compute-0 podman[247052]: 2026-01-06 16:00:34.85427507 +0000 UTC m=+0.112628751 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 16:00:34 compute-0 podman[247051]: 2026-01-06 16:00:34.878364066 +0000 UTC m=+0.133080658 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 06 16:00:37 compute-0 podman[247091]: 2026-01-06 16:00:37.861381904 +0000 UTC m=+0.121302886 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_id=kepler, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 06 16:00:42 compute-0 podman[247111]: 2026-01-06 16:00:42.858800091 +0000 UTC m=+0.111614843 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 16:00:42 compute-0 podman[247110]: 2026-01-06 16:00:42.935533552 +0000 UTC m=+0.193247237 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 16:00:47 compute-0 nova_compute[185513]: 2026-01-06 16:00:47.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:00:49 compute-0 nova_compute[185513]: 2026-01-06 16:00:49.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:00:52 compute-0 nova_compute[185513]: 2026-01-06 16:00:52.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:00:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:00:53.713 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:00:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:00:53.715 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:00:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:00:53.715 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:00:55 compute-0 nova_compute[185513]: 2026-01-06 16:00:55.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:00:55 compute-0 podman[247157]: 2026-01-06 16:00:55.875741509 +0000 UTC m=+0.128127343 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 06 16:00:56 compute-0 nova_compute[185513]: 2026-01-06 16:00:56.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:00:56 compute-0 nova_compute[185513]: 2026-01-06 16:00:56.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:00:57 compute-0 nova_compute[185513]: 2026-01-06 16:00:57.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:00:57 compute-0 podman[247176]: 2026-01-06 16:00:57.899989077 +0000 UTC m=+0.162951422 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 06 16:00:58 compute-0 nova_compute[185513]: 2026-01-06 16:00:58.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:00:58 compute-0 nova_compute[185513]: 2026-01-06 16:00:58.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:00:58 compute-0 nova_compute[185513]: 2026-01-06 16:00:58.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:00:58 compute-0 nova_compute[185513]: 2026-01-06 16:00:58.038 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.059 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.060 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.061 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.061 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.460 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.461 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5709MB free_disk=72.47930526733398GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.462 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.462 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.570 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.571 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.633 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.653 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.655 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:00:59 compute-0 nova_compute[185513]: 2026-01-06 16:00:59.655 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:00:59 compute-0 podman[201918]: time="2026-01-06T16:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:00:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:00:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3897 "" "Go-http-client/1.1"
Jan 06 16:00:59 compute-0 podman[247196]: 2026-01-06 16:00:59.830638904 +0000 UTC m=+0.099624266 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7)
Jan 06 16:01:01 compute-0 openstack_network_exporter[205258]: ERROR   16:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:01:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:01:01 compute-0 openstack_network_exporter[205258]: ERROR   16:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:01:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:01:01 compute-0 CROND[247217]: (root) CMD (run-parts /etc/cron.hourly)
Jan 06 16:01:01 compute-0 run-parts[247220]: (/etc/cron.hourly) starting 0anacron
Jan 06 16:01:01 compute-0 run-parts[247226]: (/etc/cron.hourly) finished 0anacron
Jan 06 16:01:01 compute-0 CROND[247216]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 06 16:01:05 compute-0 podman[247227]: 2026-01-06 16:01:05.863893302 +0000 UTC m=+0.120467314 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 06 16:01:05 compute-0 podman[247228]: 2026-01-06 16:01:05.871279413 +0000 UTC m=+0.116864586 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 16:01:06 compute-0 nova_compute[185513]: 2026-01-06 16:01:06.656 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:01:08 compute-0 podman[247270]: 2026-01-06 16:01:08.899616707 +0000 UTC m=+0.159500337 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container)
Jan 06 16:01:10 compute-0 nova_compute[185513]: 2026-01-06 16:01:10.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:01:13 compute-0 podman[247290]: 2026-01-06 16:01:13.867934892 +0000 UTC m=+0.142717651 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 16:01:13 compute-0 podman[247291]: 2026-01-06 16:01:13.876451044 +0000 UTC m=+0.136928283 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 16:01:26 compute-0 podman[247339]: 2026-01-06 16:01:26.846820683 +0000 UTC m=+0.107507131 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 16:01:28 compute-0 podman[247357]: 2026-01-06 16:01:28.842644627 +0000 UTC m=+0.106638288 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 06 16:01:29 compute-0 podman[201918]: time="2026-01-06T16:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:01:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:01:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3894 "" "Go-http-client/1.1"
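
The two GET requests above are libpod REST API calls served by the podman system service over its UNIX socket (the podman_exporter container later in this log sets CONTAINER_HOST=unix:///run/podman/podman.sock). A small sketch of issuing the same containers/json query with only the standard library, assuming that socket path and API version:

    #!/usr/bin/env python3
    # Sketch: query the libpod API endpoint seen above over the podman socket.
    # Assumes the podman system service is listening on /run/podman/podman.sock.
    import http.client
    import json
    import socket

    SOCKET_PATH = "/run/podman/podman.sock"

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a UNIX-domain socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection(SOCKET_PATH)
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    for c in containers:
        print(c["Names"][0], c["State"])
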
Jan 06 16:01:30 compute-0 podman[247375]: 2026-01-06 16:01:30.862497514 +0000 UTC m=+0.118649364 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 06 16:01:31 compute-0 openstack_network_exporter[205258]: ERROR   16:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:01:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:01:31 compute-0 openstack_network_exporter[205258]: ERROR   16:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:01:31 compute-0 openstack_network_exporter[205258]: 
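
The two ERROR lines above come from openstack_network_exporter calling the ovs-vswitchd appctl targets dpif-netdev/pmd-rxq-show and dpif-netdev/pmd-perf-show; those only return data when a userspace (netdev/DPDK) datapath with PMD threads exists, so on a host running the kernel datapath ovs-vswitchd answers "please specify an existing datapath". A hedged sketch of probing the same targets and treating that reply as "no userspace datapath here" rather than an error (command names are taken from the log; the graceful-skip behavior is the assumption):

    #!/usr/bin/env python3
    # Sketch: call the same ovs-appctl targets the exporter uses and treat the
    # "please specify an existing datapath" reply as "no netdev datapath on this
    # host". Assumes ovs-appctl is on PATH and the caller can reach the
    # ovs-vswitchd control socket (root on this node).
    import subprocess

    for target in ("dpif-netdev/pmd-rxq-show", "dpif-netdev/pmd-perf-show"):
        proc = subprocess.run(
            ["ovs-appctl", target],
            capture_output=True, text=True,
        )
        if proc.returncode != 0:
            # Expected on kernel-datapath hosts: no userspace datapath, no PMDs.
            print(f"{target}: skipped ({proc.stderr.strip() or 'no datapath'})")
        else:
            print(f"{target}:\n{proc.stdout}")
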
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.081 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.082 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
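
The two DEBUG lines above record that the [pollsters] source defines more pollsters than the polling manager has worker threads (here a single thread), so a polling cycle is effectively serialized and takes longer than if it ran in parallel. A minimal sketch of that dispatch pattern, with placeholder pollster callables standing in for ceilometer's real pollsters:

    #!/usr/bin/env python3
    # Sketch: many pollsters, one worker thread -- each task runs in turn, so a
    # polling cycle lasts roughly the sum of the individual pollster runtimes.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def make_pollster(name):
        def poll():
            time.sleep(0.1)          # stand-in for talking to libvirt, etc.
            return f"{name}: done"
        return poll

    pollsters = [make_pollster(f"pollster-{i}") for i in range(24)]

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as executor:   # matches "[1] threads"
        for result in executor.map(lambda p: p(), pollsters):
            print(result)
    print(f"cycle took {time.monotonic() - start:.1f}s for {len(pollsters)} pollsters")
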
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.089 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:01:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:01:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
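
The long run of "Registering pollster… / Executing discovery… / Skip pollster…" lines above is a single polling cycle: the local_instances discovery runs once, its empty result is kept in the per-cycle discovery cache, every compute pollster is then skipped ("no resources found this cycle"), and the cycle closes with the "Finished processing pollster" lines. A compressed, illustrative sketch of that control flow; the names (discover_local_instances, POLLSTERS) are placeholders, not ceilometer internals:

    #!/usr/bin/env python3
    # Sketch of the per-cycle flow visible in the DEBUG lines above: discovery
    # is cached for the cycle, and a pollster whose discovery yields no
    # resources is skipped. All names here are illustrative only.
    def discover_local_instances():
        # No instances are running on this compute node, so discovery is empty.
        return []

    POLLSTERS = ["cpu", "memory.usage", "disk.device.read.bytes",
                 "network.incoming.packets.drop"]

    def run_cycle():
        discovery_cache = {}
        for name in POLLSTERS:
            resources = discovery_cache.setdefault(
                "local_instances", discover_local_instances())
            if not resources:
                print(f"Skip pollster {name}, no resources found this cycle")
                continue
            print(f"polling {name} for {len(resources)} resources")
        for name in POLLSTERS:
            print(f"Finished processing pollster [{name}]")

    run_cycle()
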
Jan 06 16:01:36 compute-0 podman[247395]: 2026-01-06 16:01:36.862072925 +0000 UTC m=+0.105180287 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 16:01:36 compute-0 podman[247394]: 2026-01-06 16:01:36.864674366 +0000 UTC m=+0.116378963 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 06 16:01:39 compute-0 podman[247433]: 2026-01-06 16:01:39.866849256 +0000 UTC m=+0.129099220 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, release=1214.1726694543, version=9.4, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 06 16:01:44 compute-0 podman[247453]: 2026-01-06 16:01:44.771724873 +0000 UTC m=+0.091221837 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 16:01:44 compute-0 podman[247452]: 2026-01-06 16:01:44.819769722 +0000 UTC m=+0.137885749 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 16:01:47 compute-0 nova_compute[185513]: 2026-01-06 16:01:47.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:01:51 compute-0 nova_compute[185513]: 2026-01-06 16:01:51.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:01:53 compute-0 nova_compute[185513]: 2026-01-06 16:01:53.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:01:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:01:53.715 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:01:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:01:53.716 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:01:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:01:53.716 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:01:57 compute-0 nova_compute[185513]: 2026-01-06 16:01:57.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:01:57 compute-0 nova_compute[185513]: 2026-01-06 16:01:57.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:01:57 compute-0 nova_compute[185513]: 2026-01-06 16:01:57.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:01:57 compute-0 nova_compute[185513]: 2026-01-06 16:01:57.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:01:57 compute-0 podman[247500]: 2026-01-06 16:01:57.839871298 +0000 UTC m=+0.107262384 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 16:01:58 compute-0 nova_compute[185513]: 2026-01-06 16:01:58.026 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:01:58 compute-0 nova_compute[185513]: 2026-01-06 16:01:58.027 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:01:58 compute-0 nova_compute[185513]: 2026-01-06 16:01:58.027 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:01:58 compute-0 nova_compute[185513]: 2026-01-06 16:01:58.048 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:01:59 compute-0 podman[201918]: time="2026-01-06T16:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:01:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:01:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3900 "" "Go-http-client/1.1"
Jan 06 16:01:59 compute-0 podman[247518]: 2026-01-06 16:01:59.858938485 +0000 UTC m=+0.120277509 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute)
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.064 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.065 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.065 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.066 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:02:01 compute-0 openstack_network_exporter[205258]: ERROR   16:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:02:01 compute-0 openstack_network_exporter[205258]: ERROR   16:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.676 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.679 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5699MB free_disk=72.47883605957031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.680 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.681 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.780 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.780 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.854 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.871 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.874 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:02:01 compute-0 nova_compute[185513]: 2026-01-06 16:02:01.874 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:02:01 compute-0 podman[247537]: 2026-01-06 16:02:01.879651447 +0000 UTC m=+0.139439802 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.buildah.version=1.33.7, container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350)
Jan 06 16:02:06 compute-0 nova_compute[185513]: 2026-01-06 16:02:06.875 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:02:07 compute-0 podman[247557]: 2026-01-06 16:02:07.83788465 +0000 UTC m=+0.103857682 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:02:07 compute-0 podman[247558]: 2026-01-06 16:02:07.872951545 +0000 UTC m=+0.125419579 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:02:10 compute-0 podman[247600]: 2026-01-06 16:02:10.896820537 +0000 UTC m=+0.153039962 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, distribution-scope=public, version=9.4, name=ubi9, release-0.7.12=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:02:15 compute-0 podman[247621]: 2026-01-06 16:02:15.862615123 +0000 UTC m=+0.113325770 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 16:02:15 compute-0 podman[247620]: 2026-01-06 16:02:15.919231246 +0000 UTC m=+0.169906392 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 06 16:02:28 compute-0 podman[247666]: 2026-01-06 16:02:28.854475488 +0000 UTC m=+0.106404551 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 06 16:02:29 compute-0 podman[201918]: time="2026-01-06T16:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:02:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:02:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3896 "" "Go-http-client/1.1"
Jan 06 16:02:30 compute-0 podman[247683]: 2026-01-06 16:02:30.866274658 +0000 UTC m=+0.120627229 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 06 16:02:31 compute-0 openstack_network_exporter[205258]: ERROR   16:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:02:31 compute-0 openstack_network_exporter[205258]: ERROR   16:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:02:32 compute-0 podman[247702]: 2026-01-06 16:02:32.853536247 +0000 UTC m=+0.110573294 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350)
Jan 06 16:02:38 compute-0 podman[247720]: 2026-01-06 16:02:38.84364272 +0000 UTC m=+0.108125628 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 06 16:02:38 compute-0 podman[247721]: 2026-01-06 16:02:38.862487564 +0000 UTC m=+0.109084164 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 16:02:41 compute-0 podman[247764]: 2026-01-06 16:02:41.871435849 +0000 UTC m=+0.130365544 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, config_id=kepler)
Jan 06 16:02:46 compute-0 podman[247784]: 2026-01-06 16:02:46.877869512 +0000 UTC m=+0.125473801 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 16:02:46 compute-0 podman[247783]: 2026-01-06 16:02:46.945192467 +0000 UTC m=+0.202751557 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 06 16:02:47 compute-0 nova_compute[185513]: 2026-01-06 16:02:47.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:02:52 compute-0 nova_compute[185513]: 2026-01-06 16:02:52.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:02:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:02:53.717 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:02:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:02:53.718 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:02:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:02:53.718 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:02:55 compute-0 nova_compute[185513]: 2026-01-06 16:02:55.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:02:57 compute-0 nova_compute[185513]: 2026-01-06 16:02:57.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:02:57 compute-0 nova_compute[185513]: 2026-01-06 16:02:57.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:02:58 compute-0 nova_compute[185513]: 2026-01-06 16:02:58.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:02:58 compute-0 nova_compute[185513]: 2026-01-06 16:02:58.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:02:58 compute-0 nova_compute[185513]: 2026-01-06 16:02:58.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:02:58 compute-0 nova_compute[185513]: 2026-01-06 16:02:58.043 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:02:58 compute-0 nova_compute[185513]: 2026-01-06 16:02:58.043 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:02:59 compute-0 nova_compute[185513]: 2026-01-06 16:02:59.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:02:59 compute-0 podman[201918]: time="2026-01-06T16:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:02:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:02:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3894 "" "Go-http-client/1.1"
Jan 06 16:02:59 compute-0 podman[247833]: 2026-01-06 16:02:59.856569728 +0000 UTC m=+0.111958201 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:03:01 compute-0 openstack_network_exporter[205258]: ERROR   16:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:03:01 compute-0 openstack_network_exporter[205258]: ERROR   16:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:03:01 compute-0 podman[247851]: 2026-01-06 16:03:01.846876392 +0000 UTC m=+0.113219226 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224)
Jan 06 16:03:02 compute-0 nova_compute[185513]: 2026-01-06 16:03:02.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:03:02 compute-0 nova_compute[185513]: 2026-01-06 16:03:02.434 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:03:02 compute-0 nova_compute[185513]: 2026-01-06 16:03:02.435 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:03:02 compute-0 nova_compute[185513]: 2026-01-06 16:03:02.435 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:03:02 compute-0 nova_compute[185513]: 2026-01-06 16:03:02.435 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:03:03 compute-0 nova_compute[185513]: 2026-01-06 16:03:03.024 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:03:03 compute-0 nova_compute[185513]: 2026-01-06 16:03:03.027 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5697MB free_disk=72.47869491577148GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:03:03 compute-0 nova_compute[185513]: 2026-01-06 16:03:03.027 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:03:03 compute-0 nova_compute[185513]: 2026-01-06 16:03:03.028 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:03:03 compute-0 nova_compute[185513]: 2026-01-06 16:03:03.120 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:03:03 compute-0 nova_compute[185513]: 2026-01-06 16:03:03.121 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:03:03 compute-0 nova_compute[185513]: 2026-01-06 16:03:03.151 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:03:03 compute-0 nova_compute[185513]: 2026-01-06 16:03:03.174 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:03:03 compute-0 nova_compute[185513]: 2026-01-06 16:03:03.177 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:03:03 compute-0 nova_compute[185513]: 2026-01-06 16:03:03.177 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:03:03 compute-0 podman[247870]: 2026-01-06 16:03:03.892430531 +0000 UTC m=+0.147351207 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6)
Jan 06 16:03:08 compute-0 nova_compute[185513]: 2026-01-06 16:03:08.178 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:03:09 compute-0 podman[247892]: 2026-01-06 16:03:09.853694637 +0000 UTC m=+0.102875955 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 16:03:09 compute-0 podman[247891]: 2026-01-06 16:03:09.880122257 +0000 UTC m=+0.136018458 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 16:03:10 compute-0 nova_compute[185513]: 2026-01-06 16:03:10.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:03:12 compute-0 podman[247933]: 2026-01-06 16:03:12.894970953 +0000 UTC m=+0.154927494 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.29.0, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container)
Jan 06 16:03:17 compute-0 podman[247954]: 2026-01-06 16:03:17.882064979 +0000 UTC m=+0.126193846 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 16:03:17 compute-0 podman[247953]: 2026-01-06 16:03:17.969930459 +0000 UTC m=+0.220642612 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 06 16:03:29 compute-0 podman[201918]: time="2026-01-06T16:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:03:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:03:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3894 "" "Go-http-client/1.1"
Jan 06 16:03:30 compute-0 podman[248002]: 2026-01-06 16:03:30.857413149 +0000 UTC m=+0.123412072 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 06 16:03:31 compute-0 openstack_network_exporter[205258]: ERROR   16:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:03:31 compute-0 openstack_network_exporter[205258]: ERROR   16:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:03:32 compute-0 podman[248021]: 2026-01-06 16:03:32.869557812 +0000 UTC m=+0.126437622 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.082 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.083 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.111 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:03:33.112 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:03:34 compute-0 podman[248041]: 2026-01-06 16:03:34.914118143 +0000 UTC m=+0.169362351 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6)
Jan 06 16:03:40 compute-0 podman[248063]: 2026-01-06 16:03:40.845475034 +0000 UTC m=+0.101269470 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 16:03:40 compute-0 podman[248062]: 2026-01-06 16:03:40.889295586 +0000 UTC m=+0.138349822 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 06 16:03:43 compute-0 podman[248107]: 2026-01-06 16:03:43.866800907 +0000 UTC m=+0.125183748 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64)
Jan 06 16:03:47 compute-0 nova_compute[185513]: 2026-01-06 16:03:47.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:03:48 compute-0 podman[248127]: 2026-01-06 16:03:48.872507513 +0000 UTC m=+0.117252757 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 16:03:48 compute-0 podman[248126]: 2026-01-06 16:03:48.970661678 +0000 UTC m=+0.225242745 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 06 16:03:52 compute-0 nova_compute[185513]: 2026-01-06 16:03:52.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:03:52 compute-0 nova_compute[185513]: 2026-01-06 16:03:52.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 06 16:03:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:03:53.719 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:03:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:03:53.721 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:03:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:03:53.721 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:03:54 compute-0 nova_compute[185513]: 2026-01-06 16:03:54.112 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:03:57 compute-0 nova_compute[185513]: 2026-01-06 16:03:57.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:03:57 compute-0 nova_compute[185513]: 2026-01-06 16:03:57.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:03:57 compute-0 nova_compute[185513]: 2026-01-06 16:03:57.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:03:58 compute-0 nova_compute[185513]: 2026-01-06 16:03:58.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:03:59 compute-0 nova_compute[185513]: 2026-01-06 16:03:59.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:03:59 compute-0 nova_compute[185513]: 2026-01-06 16:03:59.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:03:59 compute-0 nova_compute[185513]: 2026-01-06 16:03:59.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:03:59 compute-0 nova_compute[185513]: 2026-01-06 16:03:59.049 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:03:59 compute-0 nova_compute[185513]: 2026-01-06 16:03:59.050 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:03:59 compute-0 podman[201918]: time="2026-01-06T16:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:03:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:03:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3897 "" "Go-http-client/1.1"
Jan 06 16:04:01 compute-0 nova_compute[185513]: 2026-01-06 16:04:01.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:04:01 compute-0 nova_compute[185513]: 2026-01-06 16:04:01.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 06 16:04:01 compute-0 nova_compute[185513]: 2026-01-06 16:04:01.045 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 06 16:04:01 compute-0 openstack_network_exporter[205258]: ERROR   16:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:04:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:04:01 compute-0 openstack_network_exporter[205258]: ERROR   16:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:04:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:04:01 compute-0 podman[248176]: 2026-01-06 16:04:01.867761235 +0000 UTC m=+0.129303709 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.046 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.118 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.119 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.119 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.120 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.565 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.566 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5695MB free_disk=72.47869491577148GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.567 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.567 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.714 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.715 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.831 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing inventories for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.913 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating ProviderTree inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.914 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.931 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing aggregate associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.956 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing trait associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 06 16:04:02 compute-0 nova_compute[185513]: 2026-01-06 16:04:02.985 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:04:03 compute-0 nova_compute[185513]: 2026-01-06 16:04:03.006 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:04:03 compute-0 nova_compute[185513]: 2026-01-06 16:04:03.008 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:04:03 compute-0 nova_compute[185513]: 2026-01-06 16:04:03.009 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:04:03 compute-0 podman[248194]: 2026-01-06 16:04:03.801937425 +0000 UTC m=+0.075747737 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 06 16:04:05 compute-0 podman[248213]: 2026-01-06 16:04:05.861519277 +0000 UTC m=+0.115920351 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Jan 06 16:04:09 compute-0 nova_compute[185513]: 2026-01-06 16:04:09.986 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:04:10 compute-0 nova_compute[185513]: 2026-01-06 16:04:10.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:04:11 compute-0 podman[248235]: 2026-01-06 16:04:11.871812889 +0000 UTC m=+0.113468325 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:04:11 compute-0 podman[248234]: 2026-01-06 16:04:11.89277046 +0000 UTC m=+0.138717021 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 06 16:04:14 compute-0 podman[248275]: 2026-01-06 16:04:14.812648061 +0000 UTC m=+0.134800676 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.buildah.version=1.29.0, version=9.4, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vendor=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30)
Jan 06 16:04:19 compute-0 podman[248296]: 2026-01-06 16:04:19.872993236 +0000 UTC m=+0.119355353 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 16:04:19 compute-0 podman[248295]: 2026-01-06 16:04:19.94641201 +0000 UTC m=+0.203670969 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller)
Jan 06 16:04:29 compute-0 podman[201918]: time="2026-01-06T16:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:04:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:04:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3892 "" "Go-http-client/1.1"
Jan 06 16:04:31 compute-0 openstack_network_exporter[205258]: ERROR   16:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:04:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:04:31 compute-0 openstack_network_exporter[205258]: ERROR   16:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:04:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:04:32 compute-0 podman[248344]: 2026-01-06 16:04:32.856353168 +0000 UTC m=+0.107919386 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 06 16:04:34 compute-0 podman[248362]: 2026-01-06 16:04:34.864259029 +0000 UTC m=+0.123267248 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 06 16:04:36 compute-0 podman[248382]: 2026-01-06 16:04:36.860727093 +0000 UTC m=+0.117120873 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-type=git, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:04:42 compute-0 podman[248402]: 2026-01-06 16:04:42.875227039 +0000 UTC m=+0.112693785 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 16:04:42 compute-0 podman[248401]: 2026-01-06 16:04:42.901186033 +0000 UTC m=+0.150500276 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Jan 06 16:04:45 compute-0 podman[248443]: 2026-01-06 16:04:45.889310099 +0000 UTC m=+0.146477108 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Jan 06 16:04:47 compute-0 nova_compute[185513]: 2026-01-06 16:04:47.043 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:04:50 compute-0 podman[248465]: 2026-01-06 16:04:50.860785389 +0000 UTC m=+0.110077634 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 16:04:50 compute-0 podman[248464]: 2026-01-06 16:04:50.936293419 +0000 UTC m=+0.196500346 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:04:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:04:53.721 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:04:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:04:53.722 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:04:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:04:53.722 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:04:54 compute-0 nova_compute[185513]: 2026-01-06 16:04:54.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:04:57 compute-0 nova_compute[185513]: 2026-01-06 16:04:57.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:04:57 compute-0 nova_compute[185513]: 2026-01-06 16:04:57.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:04:57 compute-0 nova_compute[185513]: 2026-01-06 16:04:57.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:04:59 compute-0 nova_compute[185513]: 2026-01-06 16:04:59.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:04:59 compute-0 nova_compute[185513]: 2026-01-06 16:04:59.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:04:59 compute-0 podman[201918]: time="2026-01-06T16:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:04:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:04:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3895 "" "Go-http-client/1.1"
Jan 06 16:05:01 compute-0 nova_compute[185513]: 2026-01-06 16:05:01.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:05:01 compute-0 nova_compute[185513]: 2026-01-06 16:05:01.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:05:01 compute-0 nova_compute[185513]: 2026-01-06 16:05:01.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:05:01 compute-0 nova_compute[185513]: 2026-01-06 16:05:01.042 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:05:01 compute-0 openstack_network_exporter[205258]: ERROR   16:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:05:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:05:01 compute-0 openstack_network_exporter[205258]: ERROR   16:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:05:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.074 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.075 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.076 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.077 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.700 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.702 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5695MB free_disk=72.47869491577148GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.703 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.703 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.789 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.790 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.859 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.894 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.897 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:05:02 compute-0 nova_compute[185513]: 2026-01-06 16:05:02.898 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:05:03 compute-0 podman[248511]: 2026-01-06 16:05:03.885589253 +0000 UTC m=+0.142489772 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 06 16:05:05 compute-0 podman[248530]: 2026-01-06 16:05:05.876394185 +0000 UTC m=+0.131979340 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute)
Jan 06 16:05:07 compute-0 podman[248552]: 2026-01-06 16:05:07.836340874 +0000 UTC m=+0.106499159 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 06 16:05:10 compute-0 nova_compute[185513]: 2026-01-06 16:05:10.894 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:05:11 compute-0 nova_compute[185513]: 2026-01-06 16:05:11.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:05:13 compute-0 podman[248575]: 2026-01-06 16:05:13.867576755 +0000 UTC m=+0.115257593 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 16:05:13 compute-0 podman[248574]: 2026-01-06 16:05:13.872076966 +0000 UTC m=+0.124582313 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 16:05:16 compute-0 podman[248618]: 2026-01-06 16:05:16.865112344 +0000 UTC m=+0.124541792 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_id=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, managed_by=edpm_ansible)
Jan 06 16:05:21 compute-0 podman[248641]: 2026-01-06 16:05:21.840695223 +0000 UTC m=+0.098328950 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 16:05:21 compute-0 podman[248640]: 2026-01-06 16:05:21.940969965 +0000 UTC m=+0.192859439 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
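[annotation] The four health_status events above come from the 'healthcheck' stanza in each container's config_data ('mount': '/var/lib/openstack/healthchecks/<name>', 'test': '/openstack/healthcheck ...'): podman runs that test on a timer and logs health_status=healthy with the failing streak. A minimal sketch for re-running the same checks on demand from this host, assuming podman is on PATH and using the container names shown in the log:

```python
# Hedged sketch: manually re-run the podman health checks that produce the
# "container health_status ... health_status=healthy" journal events above.
# Exit status 0 means healthy, non-zero means the configured test failed.
import subprocess

CONTAINERS = ["ceilometer_agent_ipmi", "kepler", "node_exporter", "ovn_controller"]

for name in CONTAINERS:
    result = subprocess.run(
        ["podman", "healthcheck", "run", name],
        capture_output=True, text=True,
    )
    status = "healthy" if result.returncode == 0 else "unhealthy"
    print(f"{name}: {status} {result.stderr.strip()}".rstrip())
```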
Jan 06 16:05:29 compute-0 podman[201918]: time="2026-01-06T16:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:05:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:05:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3895 "" "Go-http-client/1.1"
Jan 06 16:05:31 compute-0 openstack_network_exporter[205258]: ERROR   16:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:05:31 compute-0 openstack_network_exporter[205258]: ERROR   16:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
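[annotation] The two exporter errors above are ovs-appctl calls to dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show, which only apply to a userspace (netdev/DPDK) datapath; on a host using the kernel datapath there is no such datapath and ovs-vswitchd answers "please specify an existing datapath". A minimal sketch for confirming which datapaths exist on this node, assuming ovs-appctl is installed and the caller can reach the vswitchd control socket:

```python
# Hedged sketch: reproduce the failing appctl calls and list the datapaths
# ovs-vswitchd actually has. "dpif-netdev/*" commands fail exactly as logged
# above when no userspace (netdev) datapath is configured.
import subprocess

def appctl(*args):
    proc = subprocess.run(["ovs-appctl", *args], capture_output=True, text=True)
    return proc.returncode, (proc.stdout + proc.stderr).strip()

rc, out = appctl("dpif/show")                  # enumerate configured datapaths
print("datapaths:\n" + out)

rc, out = appctl("dpif-netdev/pmd-rxq-show")   # fails unless a netdev datapath exists
print("pmd-rxq-show:", "ok" if rc == 0 else f"error ({out})")
```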
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.084 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.085 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
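[annotation] The two DEBUG lines above note that the [pollsters] source has more pollsters than worker threads, so the whole set is serialized onto a single thread (the ThreadPoolExecutor objects in the following registration lines are that one-thread pool) and a polling cycle can take longer than any individual pollster. A minimal, illustrative sketch of that queuing effect, with pollster names taken from the log and timings invented for the example:

```python
# Minimal sketch of the condition the manager logs: more polling tasks than
# worker threads, so tasks queue and the cycle stretches beyond one task's
# runtime. Timings are illustrative only.
import time
from concurrent.futures import ThreadPoolExecutor

pollsters = ["cpu", "memory.usage", "disk.device.read.bytes",
             "network.incoming.packets", "power.state"]

def poll(name):
    time.sleep(0.1)          # stand-in for one pollster's work
    return name

start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as pool:   # "[1] threads", as in the log
    list(pool.map(poll, pollsters))
print(f"{len(pollsters)} pollsters on 1 worker took {time.monotonic() - start:.2f}s")
```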
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.112 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.128 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.129 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.130 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.131 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.132 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.133 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.135 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.137 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.138 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.139 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.140 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.140 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.140 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.140 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.141 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:05:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:05:33.141 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
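[annotation] Every pollster in this cycle was skipped because the local_instances discovery returned an empty list (the discovery cache shows {'local_instances': []}); instance-scoped meters have nothing to measure while no guests run on this hypervisor. A minimal sketch for confirming that from the host, assuming the libvirt Python bindings are installed and the caller may read qemu:///system (the libvirt socket under /run/libvirt mounted into the ceilometer and kepler containers above):

```python
# Hedged sketch: check whether libvirt reports any guests on this hypervisor.
# An empty list matches the "Skip pollster ..., no resources found this cycle"
# lines above.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")
domains = conn.listAllDomains()
if not domains:
    print("no libvirt domains - every instance-scoped pollster will be skipped")
for dom in domains:
    print(dom.name(), "active" if dom.isActive() else "inactive")
conn.close()
```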
Jan 06 16:05:34 compute-0 podman[248692]: 2026-01-06 16:05:34.883812176 +0000 UTC m=+0.138818544 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 06 16:05:36 compute-0 podman[248711]: 2026-01-06 16:05:36.901226702 +0000 UTC m=+0.157131064 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 06 16:05:38 compute-0 podman[248730]: 2026-01-06 16:05:38.88210726 +0000 UTC m=+0.147410844 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:05:44 compute-0 podman[248751]: 2026-01-06 16:05:44.855463894 +0000 UTC m=+0.139498172 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:05:44 compute-0 podman[248752]: 2026-01-06 16:05:44.876220439 +0000 UTC m=+0.156324472 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:05:47 compute-0 podman[248794]: 2026-01-06 16:05:47.87496787 +0000 UTC m=+0.129925146 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, distribution-scope=public, io.openshift.tags=base rhel9)
Jan 06 16:05:48 compute-0 nova_compute[185513]: 2026-01-06 16:05:48.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:05:52 compute-0 podman[248815]: 2026-01-06 16:05:52.901706377 +0000 UTC m=+0.141296330 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 16:05:52 compute-0 podman[248814]: 2026-01-06 16:05:52.934078013 +0000 UTC m=+0.179772939 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 06 16:05:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:05:53.723 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:05:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:05:53.724 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:05:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:05:53.725 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:05:54 compute-0 nova_compute[185513]: 2026-01-06 16:05:54.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:05:57 compute-0 nova_compute[185513]: 2026-01-06 16:05:57.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:05:59 compute-0 nova_compute[185513]: 2026-01-06 16:05:59.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:05:59 compute-0 nova_compute[185513]: 2026-01-06 16:05:59.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:05:59 compute-0 nova_compute[185513]: 2026-01-06 16:05:59.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:05:59 compute-0 nova_compute[185513]: 2026-01-06 16:05:59.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:05:59 compute-0 podman[201918]: time="2026-01-06T16:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:05:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:05:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3899 "" "Go-http-client/1.1"
Jan 06 16:06:01 compute-0 openstack_network_exporter[205258]: ERROR   16:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:06:01 compute-0 openstack_network_exporter[205258]: ERROR   16:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:06:02 compute-0 nova_compute[185513]: 2026-01-06 16:06:02.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:06:02 compute-0 nova_compute[185513]: 2026-01-06 16:06:02.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:06:02 compute-0 nova_compute[185513]: 2026-01-06 16:06:02.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:06:02 compute-0 nova_compute[185513]: 2026-01-06 16:06:02.045 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.050 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.051 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.051 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.052 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.501 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.502 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5677MB free_disk=72.47796249389648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.502 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.503 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.568 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.569 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.598 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.618 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.621 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:06:03 compute-0 nova_compute[185513]: 2026-01-06 16:06:03.622 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:06:05 compute-0 podman[248861]: 2026-01-06 16:06:05.887690931 +0000 UTC m=+0.141730552 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:06:07 compute-0 podman[248879]: 2026-01-06 16:06:07.856729951 +0000 UTC m=+0.115785228 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 06 16:06:09 compute-0 podman[248898]: 2026-01-06 16:06:09.871856053 +0000 UTC m=+0.128995701 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, config_id=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6)
Jan 06 16:06:13 compute-0 nova_compute[185513]: 2026-01-06 16:06:13.626 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:06:15 compute-0 podman[248919]: 2026-01-06 16:06:15.834742639 +0000 UTC m=+0.094027356 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 16:06:15 compute-0 podman[248918]: 2026-01-06 16:06:15.848793695 +0000 UTC m=+0.114157624 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:06:18 compute-0 podman[248958]: 2026-01-06 16:06:18.877736363 +0000 UTC m=+0.144146846 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=kepler, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30)
Jan 06 16:06:23 compute-0 podman[248979]: 2026-01-06 16:06:23.839673948 +0000 UTC m=+0.100556510 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 16:06:23 compute-0 podman[248978]: 2026-01-06 16:06:23.883705895 +0000 UTC m=+0.147905955 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 06 16:06:29 compute-0 podman[201918]: time="2026-01-06T16:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:06:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:06:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3892 "" "Go-http-client/1.1"
Jan 06 16:06:31 compute-0 openstack_network_exporter[205258]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:06:31 compute-0 openstack_network_exporter[205258]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:06:36 compute-0 podman[249029]: 2026-01-06 16:06:36.844201047 +0000 UTC m=+0.100197101 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 16:06:38 compute-0 podman[249047]: 2026-01-06 16:06:38.871561686 +0000 UTC m=+0.128005144 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 06 16:06:40 compute-0 podman[249068]: 2026-01-06 16:06:40.862566445 +0000 UTC m=+0.116278670 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, version=9.6, architecture=x86_64, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 06 16:06:46 compute-0 podman[249090]: 2026-01-06 16:06:46.865773919 +0000 UTC m=+0.117199245 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 16:06:46 compute-0 podman[249089]: 2026-01-06 16:06:46.886079572 +0000 UTC m=+0.145341168 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 06 16:06:48 compute-0 nova_compute[185513]: 2026-01-06 16:06:48.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:06:49 compute-0 podman[249132]: 2026-01-06 16:06:49.885176663 +0000 UTC m=+0.137267513 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, version=9.4, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, config_id=kepler, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:06:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:06:53.724 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:06:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:06:53.725 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:06:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:06:53.725 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:06:54 compute-0 podman[249155]: 2026-01-06 16:06:54.879034903 +0000 UTC m=+0.125117937 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 16:06:54 compute-0 podman[249154]: 2026-01-06 16:06:54.929683328 +0000 UTC m=+0.185619805 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 16:06:55 compute-0 nova_compute[185513]: 2026-01-06 16:06:55.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:06:58 compute-0 nova_compute[185513]: 2026-01-06 16:06:58.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:06:59 compute-0 nova_compute[185513]: 2026-01-06 16:06:59.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:06:59 compute-0 podman[201918]: time="2026-01-06T16:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:06:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:06:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3887 "" "Go-http-client/1.1"
Jan 06 16:07:00 compute-0 nova_compute[185513]: 2026-01-06 16:07:00.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:07:01 compute-0 nova_compute[185513]: 2026-01-06 16:07:01.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:07:01 compute-0 nova_compute[185513]: 2026-01-06 16:07:01.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:07:01 compute-0 openstack_network_exporter[205258]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:07:01 compute-0 openstack_network_exporter[205258]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:07:02 compute-0 nova_compute[185513]: 2026-01-06 16:07:02.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:07:02 compute-0 nova_compute[185513]: 2026-01-06 16:07:02.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:07:02 compute-0 nova_compute[185513]: 2026-01-06 16:07:02.026 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:07:02 compute-0 nova_compute[185513]: 2026-01-06 16:07:02.062 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.072 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.073 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.074 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.074 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.636 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.638 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5692MB free_disk=72.47798156738281GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.638 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.639 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.712 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.713 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.745 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.761 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.763 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:07:05 compute-0 nova_compute[185513]: 2026-01-06 16:07:05.764 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:07:07 compute-0 podman[249203]: 2026-01-06 16:07:07.872832976 +0000 UTC m=+0.123831543 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 06 16:07:09 compute-0 podman[249222]: 2026-01-06 16:07:09.860505936 +0000 UTC m=+0.115910881 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 06 16:07:11 compute-0 podman[249241]: 2026-01-06 16:07:11.87228713 +0000 UTC m=+0.137893379 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, architecture=x86_64, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Jan 06 16:07:14 compute-0 nova_compute[185513]: 2026-01-06 16:07:14.766 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:07:15 compute-0 nova_compute[185513]: 2026-01-06 16:07:15.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:07:17 compute-0 podman[249262]: 2026-01-06 16:07:17.874756251 +0000 UTC m=+0.124342617 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 16:07:17 compute-0 podman[249261]: 2026-01-06 16:07:17.875899882 +0000 UTC m=+0.131190780 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 06 16:07:20 compute-0 podman[249307]: 2026-01-06 16:07:20.868651752 +0000 UTC m=+0.125287931 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., container_name=kepler, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:07:25 compute-0 podman[249327]: 2026-01-06 16:07:25.866922468 +0000 UTC m=+0.125396605 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 16:07:25 compute-0 podman[249326]: 2026-01-06 16:07:25.958742593 +0000 UTC m=+0.212891334 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:07:29 compute-0 podman[201918]: time="2026-01-06T16:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:07:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:07:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3893 "" "Go-http-client/1.1"
Jan 06 16:07:31 compute-0 openstack_network_exporter[205258]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:07:31 compute-0 openstack_network_exporter[205258]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.085 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.086 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.113 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb774e60>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.120 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:07:33.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:07:38 compute-0 podman[249379]: 2026-01-06 16:07:38.874994264 +0000 UTC m=+0.132517105 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 06 16:07:40 compute-0 podman[249399]: 2026-01-06 16:07:40.86291858 +0000 UTC m=+0.113839716 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 06 16:07:42 compute-0 podman[249420]: 2026-01-06 16:07:42.86671176 +0000 UTC m=+0.119361173 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_id=openstack_network_exporter, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7)
Jan 06 16:07:48 compute-0 podman[249440]: 2026-01-06 16:07:48.839755627 +0000 UTC m=+0.094746315 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 16:07:48 compute-0 podman[249439]: 2026-01-06 16:07:48.845073459 +0000 UTC m=+0.107646520 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 06 16:07:50 compute-0 nova_compute[185513]: 2026-01-06 16:07:50.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:07:51 compute-0 podman[249482]: 2026-01-06 16:07:51.87109017 +0000 UTC m=+0.127276625 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 06 16:07:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:07:53.725 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:07:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:07:53.726 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:07:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:07:53.726 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:07:56 compute-0 nova_compute[185513]: 2026-01-06 16:07:56.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:07:56 compute-0 podman[249503]: 2026-01-06 16:07:56.835024567 +0000 UTC m=+0.097219641 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 16:07:56 compute-0 podman[249502]: 2026-01-06 16:07:56.950581877 +0000 UTC m=+0.210691265 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 06 16:07:59 compute-0 nova_compute[185513]: 2026-01-06 16:07:59.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:07:59 compute-0 podman[201918]: time="2026-01-06T16:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:07:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:07:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3899 "" "Go-http-client/1.1"
Jan 06 16:08:01 compute-0 nova_compute[185513]: 2026-01-06 16:08:01.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:08:01 compute-0 nova_compute[185513]: 2026-01-06 16:08:01.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:08:01 compute-0 openstack_network_exporter[205258]: ERROR   16:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:08:01 compute-0 openstack_network_exporter[205258]: ERROR   16:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:08:02 compute-0 nova_compute[185513]: 2026-01-06 16:08:02.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:08:02 compute-0 nova_compute[185513]: 2026-01-06 16:08:02.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:08:04 compute-0 nova_compute[185513]: 2026-01-06 16:08:04.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:08:04 compute-0 nova_compute[185513]: 2026-01-06 16:08:04.027 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:08:04 compute-0 nova_compute[185513]: 2026-01-06 16:08:04.027 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:08:04 compute-0 nova_compute[185513]: 2026-01-06 16:08:04.059 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.072 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.073 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.073 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.074 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.699 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.701 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5695MB free_disk=72.47796249389648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.702 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.702 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.814 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.815 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.851 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.884 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.886 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:08:05 compute-0 nova_compute[185513]: 2026-01-06 16:08:05.886 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:08:09 compute-0 podman[249551]: 2026-01-06 16:08:09.880669846 +0000 UTC m=+0.133328457 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 06 16:08:11 compute-0 podman[249570]: 2026-01-06 16:08:11.85322046 +0000 UTC m=+0.120796992 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 06 16:08:13 compute-0 podman[249590]: 2026-01-06 16:08:13.910329648 +0000 UTC m=+0.160281998 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Jan 06 16:08:16 compute-0 nova_compute[185513]: 2026-01-06 16:08:16.886 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:08:19 compute-0 podman[249612]: 2026-01-06 16:08:19.865856996 +0000 UTC m=+0.114307049 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:08:19 compute-0 podman[249611]: 2026-01-06 16:08:19.872255567 +0000 UTC m=+0.130391309 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 06 16:08:22 compute-0 podman[249653]: 2026-01-06 16:08:22.898734838 +0000 UTC m=+0.152871980 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, version=9.4, container_name=kepler, architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:08:27 compute-0 nova_compute[185513]: 2026-01-06 16:08:27.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:08:27 compute-0 nova_compute[185513]: 2026-01-06 16:08:27.023 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:08:27 compute-0 nova_compute[185513]: 2026-01-06 16:08:27.025 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:08:27 compute-0 nova_compute[185513]: 2026-01-06 16:08:27.026 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:08:27 compute-0 nova_compute[185513]: 2026-01-06 16:08:27.026 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:08:27 compute-0 nova_compute[185513]: 2026-01-06 16:08:27.027 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:08:27 compute-0 nova_compute[185513]: 2026-01-06 16:08:27.028 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:08:27 compute-0 nova_compute[185513]: 2026-01-06 16:08:27.053 185517 DEBUG nova.virt.libvirt.imagecache [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Skipping verification, no base directory at /var/lib/nova/instances/_base _get_base /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:367
Jan 06 16:08:27 compute-0 podman[249674]: 2026-01-06 16:08:27.852771982 +0000 UTC m=+0.110653170 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 16:08:27 compute-0 podman[249673]: 2026-01-06 16:08:27.892463624 +0000 UTC m=+0.151721669 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 16:08:29 compute-0 podman[201918]: time="2026-01-06T16:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:08:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:08:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3894 "" "Go-http-client/1.1"
Jan 06 16:08:31 compute-0 openstack_network_exporter[205258]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:08:31 compute-0 openstack_network_exporter[205258]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:08:40 compute-0 podman[249722]: 2026-01-06 16:08:40.883285325 +0000 UTC m=+0.131043826 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 06 16:08:42 compute-0 podman[249741]: 2026-01-06 16:08:42.878688371 +0000 UTC m=+0.132409643 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 06 16:08:44 compute-0 podman[249761]: 2026-01-06 16:08:44.859029723 +0000 UTC m=+0.155979253 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vcs-type=git, io.buildah.version=1.33.7, config_id=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64)
Jan 06 16:08:50 compute-0 podman[249784]: 2026-01-06 16:08:50.845752945 +0000 UTC m=+0.100242512 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 16:08:50 compute-0 podman[249783]: 2026-01-06 16:08:50.874475323 +0000 UTC m=+0.123757781 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi)
Jan 06 16:08:51 compute-0 nova_compute[185513]: 2026-01-06 16:08:51.053 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:08:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:08:53.726 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:08:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:08:53.727 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:08:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:08:53.727 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:08:53 compute-0 podman[249823]: 2026-01-06 16:08:53.852693216 +0000 UTC m=+0.108719719 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, version=9.4, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 06 16:08:58 compute-0 nova_compute[185513]: 2026-01-06 16:08:58.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:08:58 compute-0 podman[249844]: 2026-01-06 16:08:58.870867314 +0000 UTC m=+0.116630260 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 16:08:58 compute-0 podman[249843]: 2026-01-06 16:08:58.92192947 +0000 UTC m=+0.186130519 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:08:59 compute-0 podman[201918]: time="2026-01-06T16:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:08:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:08:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3902 "" "Go-http-client/1.1"
Jan 06 16:09:00 compute-0 nova_compute[185513]: 2026-01-06 16:09:00.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:01 compute-0 openstack_network_exporter[205258]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:09:01 compute-0 openstack_network_exporter[205258]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:09:03 compute-0 nova_compute[185513]: 2026-01-06 16:09:03.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:03 compute-0 nova_compute[185513]: 2026-01-06 16:09:03.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:03 compute-0 nova_compute[185513]: 2026-01-06 16:09:03.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:03 compute-0 nova_compute[185513]: 2026-01-06 16:09:03.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:09:04 compute-0 nova_compute[185513]: 2026-01-06 16:09:04.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:04 compute-0 nova_compute[185513]: 2026-01-06 16:09:04.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 06 16:09:06 compute-0 nova_compute[185513]: 2026-01-06 16:09:06.047 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:06 compute-0 nova_compute[185513]: 2026-01-06 16:09:06.048 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:09:06 compute-0 nova_compute[185513]: 2026-01-06 16:09:06.048 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:09:06 compute-0 nova_compute[185513]: 2026-01-06 16:09:06.064 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.059 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.060 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.060 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.060 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.578 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.580 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5675MB free_disk=72.47795867919922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.580 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.581 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.859 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.859 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:09:07 compute-0 nova_compute[185513]: 2026-01-06 16:09:07.970 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing inventories for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 06 16:09:08 compute-0 nova_compute[185513]: 2026-01-06 16:09:08.064 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating ProviderTree inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 06 16:09:08 compute-0 nova_compute[185513]: 2026-01-06 16:09:08.065 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 16:09:08 compute-0 nova_compute[185513]: 2026-01-06 16:09:08.081 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing aggregate associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 06 16:09:08 compute-0 nova_compute[185513]: 2026-01-06 16:09:08.103 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing trait associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 06 16:09:08 compute-0 nova_compute[185513]: 2026-01-06 16:09:08.134 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:09:08 compute-0 nova_compute[185513]: 2026-01-06 16:09:08.150 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:09:08 compute-0 nova_compute[185513]: 2026-01-06 16:09:08.152 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:09:08 compute-0 nova_compute[185513]: 2026-01-06 16:09:08.153 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:09:11 compute-0 podman[249891]: 2026-01-06 16:09:11.871654814 +0000 UTC m=+0.120315749 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 06 16:09:12 compute-0 nova_compute[185513]: 2026-01-06 16:09:12.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:12 compute-0 nova_compute[185513]: 2026-01-06 16:09:12.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 06 16:09:12 compute-0 nova_compute[185513]: 2026-01-06 16:09:12.047 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 06 16:09:13 compute-0 podman[249909]: 2026-01-06 16:09:13.870391999 +0000 UTC m=+0.129970207 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 06 16:09:15 compute-0 nova_compute[185513]: 2026-01-06 16:09:15.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:15 compute-0 podman[249929]: 2026-01-06 16:09:15.865731833 +0000 UTC m=+0.122946929 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64)
Jan 06 16:09:17 compute-0 nova_compute[185513]: 2026-01-06 16:09:17.042 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:20 compute-0 nova_compute[185513]: 2026-01-06 16:09:20.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:21 compute-0 podman[249952]: 2026-01-06 16:09:21.878825941 +0000 UTC m=+0.121401918 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 16:09:21 compute-0 podman[249951]: 2026-01-06 16:09:21.883848756 +0000 UTC m=+0.136347928 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 06 16:09:24 compute-0 podman[249993]: 2026-01-06 16:09:24.89881699 +0000 UTC m=+0.154563994 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, container_name=kepler, io.openshift.expose-services=, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 06 16:09:29 compute-0 podman[201918]: time="2026-01-06T16:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:09:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:09:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3897 "" "Go-http-client/1.1"
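Annotation: the two GET requests above are served by the podman system service on the libpod socket (the exporters reach it via CONTAINER_HOST=unix:///run/podman/podman.sock, per the podman_exporter config logged below). A minimal sketch of the same containers/json call over the unix socket using only the standard library; the field names in the loop follow the libpod list-containers schema:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a unix socket, enough for the local libpod API."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
body = conn.getresponse().read()
for c in json.loads(body):
    print(c["Names"], c["State"])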
Jan 06 16:09:29 compute-0 podman[250013]: 2026-01-06 16:09:29.839912957 +0000 UTC m=+0.099540843 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
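Annotation: among the node_exporter flags logged above, --collector.systemd.unit-include carries a regular expression that limits which systemd units are exported. A small sketch of roughly how that filter behaves (node_exporter anchors the pattern; the unit names below are only illustrative):

import re

unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
             "virtqemud.service", "sshd.service"]:
    kept = bool(unit_include.fullmatch(unit))
    print(f"{unit}: {'exported' if kept else 'filtered out'}")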
Jan 06 16:09:29 compute-0 podman[250012]: 2026-01-06 16:09:29.894440116 +0000 UTC m=+0.156086566 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 06 16:09:31 compute-0 openstack_network_exporter[205258]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:09:31 compute-0 openstack_network_exporter[205258]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
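Annotation: the two appctl errors above come from the network exporter probing PMD statistics (dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show). Those commands only apply to the userspace (netdev/DPDK) datapath, so on a host running the kernel datapath ovs-vswitchd answers "please specify an existing datapath". A quick check of the datapath type, assuming the usual OVN integration bridge name br-int and root access on the host:

import subprocess

out = subprocess.run(
    ["ovs-vsctl", "get", "Bridge", "br-int", "datapath_type"],
    capture_output=True, text=True, check=False,
)
# An empty value (or "system") indicates the kernel datapath, in which case
# the PMD-specific appctl commands have nothing to report.
print(out.stdout.strip() or '"" (kernel datapath)')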
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.089 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.090 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
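Annotation: the two entries above mean this polling task has more pollsters than worker threads, so the pollsters are queued onto a single-worker executor and effectively run one after another. A minimal sketch of that dispatch pattern (the pollster names and work function are stand-ins, not the ceilometer implementation):

from concurrent.futures import ThreadPoolExecutor

pollsters = [f"pollster-{i}" for i in range(24)]  # stand-in names

def run_pollster(name):
    # Placeholder for the real discovery plus sample-collection work.
    return f"{name}: 0 samples"

with ThreadPoolExecutor(max_workers=1) as executor:
    for result in executor.map(run_pollster, pollsters):
        print(result)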
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca611760>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.122 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.123 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
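Annotation: every pollster above is skipped because the local_instances discovery returned an empty list (the discovery cache shows {'local_instances': []}), i.e. no instances are running on this hypervisor. A quick way to confirm that directly against libvirt, assuming the sketch runs on compute-0 with access to the system libvirt socket:

import subprocess

domains = subprocess.run(
    ["virsh", "-c", "qemu:///system", "list", "--all", "--name"],
    capture_output=True, text=True, check=False,
).stdout.split()
print(domains or "no instances defined on this hypervisor")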
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.127 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:09:33.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:09:42 compute-0 podman[250061]: 2026-01-06 16:09:42.850404665 +0000 UTC m=+0.109952691 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 06 16:09:44 compute-0 podman[250079]: 2026-01-06 16:09:44.837307894 +0000 UTC m=+0.124460850 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 06 16:09:46 compute-0 podman[250099]: 2026-01-06 16:09:46.863077822 +0000 UTC m=+0.123652678 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Jan 06 16:09:47 compute-0 nova_compute[185513]: 2026-01-06 16:09:47.650 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:52 compute-0 podman[250123]: 2026-01-06 16:09:52.88929405 +0000 UTC m=+0.137896659 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:09:52 compute-0 podman[250122]: 2026-01-06 16:09:52.923929756 +0000 UTC m=+0.178404422 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 06 16:09:53 compute-0 nova_compute[185513]: 2026-01-06 16:09:53.061 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:09:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:09:53.728 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:09:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:09:53.729 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:09:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:09:53.729 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:09:55 compute-0 podman[250164]: 2026-01-06 16:09:55.845343699 +0000 UTC m=+0.100684054 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, architecture=x86_64, name=ubi9, release-0.7.12=, config_id=kepler, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:09:59 compute-0 podman[201918]: time="2026-01-06T16:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:09:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:09:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3899 "" "Go-http-client/1.1"
Jan 06 16:10:00 compute-0 nova_compute[185513]: 2026-01-06 16:10:00.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:10:00 compute-0 podman[250185]: 2026-01-06 16:10:00.867100153 +0000 UTC m=+0.114433302 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 16:10:00 compute-0 podman[250184]: 2026-01-06 16:10:00.902616763 +0000 UTC m=+0.158984623 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 16:10:01 compute-0 nova_compute[185513]: 2026-01-06 16:10:01.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:10:01 compute-0 openstack_network_exporter[205258]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:10:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:10:01 compute-0 openstack_network_exporter[205258]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:10:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:10:01 compute-0 anacron[30942]: Job `cron.weekly' started
Jan 06 16:10:01 compute-0 anacron[30942]: Job `cron.weekly' terminated
Jan 06 16:10:03 compute-0 nova_compute[185513]: 2026-01-06 16:10:03.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:10:03 compute-0 nova_compute[185513]: 2026-01-06 16:10:03.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:10:03 compute-0 nova_compute[185513]: 2026-01-06 16:10:03.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:10:03 compute-0 nova_compute[185513]: 2026-01-06 16:10:03.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.040 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.040 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.082 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.082 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.082 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.083 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.662 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.663 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5697MB free_disk=72.47797393798828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.663 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.664 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.745 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.745 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.772 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.789 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.791 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:10:08 compute-0 nova_compute[185513]: 2026-01-06 16:10:08.791 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:10:14 compute-0 podman[250236]: 2026-01-06 16:10:14.000655294 +0000 UTC m=+0.118983824 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 06 16:10:15 compute-0 podman[250255]: 2026-01-06 16:10:15.839782251 +0000 UTC m=+0.110368283 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Jan 06 16:10:17 compute-0 podman[250276]: 2026-01-06 16:10:17.890222238 +0000 UTC m=+0.144344801 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:10:18 compute-0 nova_compute[185513]: 2026-01-06 16:10:18.775 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:10:23 compute-0 podman[250299]: 2026-01-06 16:10:23.845708085 +0000 UTC m=+0.098968818 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:10:23 compute-0 podman[250298]: 2026-01-06 16:10:23.875017229 +0000 UTC m=+0.126437713 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 06 16:10:26 compute-0 podman[250340]: 2026-01-06 16:10:26.88689427 +0000 UTC m=+0.142602355 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, version=9.4, distribution-scope=public, name=ubi9, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30)
Jan 06 16:10:29 compute-0 podman[201918]: time="2026-01-06T16:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:10:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:10:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3895 "" "Go-http-client/1.1"
Jan 06 16:10:31 compute-0 openstack_network_exporter[205258]: ERROR   16:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:10:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:10:31 compute-0 openstack_network_exporter[205258]: ERROR   16:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:10:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:10:31 compute-0 podman[250360]: 2026-01-06 16:10:31.880224413 +0000 UTC m=+0.122891858 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 16:10:31 compute-0 podman[250359]: 2026-01-06 16:10:31.952797844 +0000 UTC m=+0.204873361 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 06 16:10:44 compute-0 podman[250409]: 2026-01-06 16:10:44.802392919 +0000 UTC m=+0.106593342 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 06 16:10:46 compute-0 podman[250427]: 2026-01-06 16:10:46.883895908 +0000 UTC m=+0.138790753 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 06 16:10:48 compute-0 podman[250445]: 2026-01-06 16:10:48.893856013 +0000 UTC m=+0.150915227 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:10:53 compute-0 nova_compute[185513]: 2026-01-06 16:10:53.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:10:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:10:53.730 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:10:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:10:53.731 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:10:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:10:53.732 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:10:54 compute-0 podman[250468]: 2026-01-06 16:10:54.8856328 +0000 UTC m=+0.136582484 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:10:54 compute-0 podman[250467]: 2026-01-06 16:10:54.890561962 +0000 UTC m=+0.146706655 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 16:10:57 compute-0 podman[250508]: 2026-01-06 16:10:57.864905629 +0000 UTC m=+0.118952092 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-type=git, release-0.7.12=, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 06 16:10:59 compute-0 podman[201918]: time="2026-01-06T16:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:10:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:10:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3899 "" "Go-http-client/1.1"
Jan 06 16:11:00 compute-0 nova_compute[185513]: 2026-01-06 16:11:00.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:11:01 compute-0 openstack_network_exporter[205258]: ERROR   16:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:11:01 compute-0 openstack_network_exporter[205258]: ERROR   16:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:11:02 compute-0 nova_compute[185513]: 2026-01-06 16:11:02.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:11:02 compute-0 systemd[1]: Starting dnf makecache...
Jan 06 16:11:02 compute-0 podman[250529]: 2026-01-06 16:11:02.880848648 +0000 UTC m=+0.132128785 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 16:11:02 compute-0 podman[250528]: 2026-01-06 16:11:02.936100115 +0000 UTC m=+0.194832501 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 06 16:11:02 compute-0 dnf[250530]: Metadata cache refreshed recently.
Jan 06 16:11:03 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 06 16:11:03 compute-0 systemd[1]: Finished dnf makecache.
Jan 06 16:11:03 compute-0 nova_compute[185513]: 2026-01-06 16:11:03.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:11:04 compute-0 nova_compute[185513]: 2026-01-06 16:11:04.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:11:04 compute-0 nova_compute[185513]: 2026-01-06 16:11:04.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:11:05 compute-0 nova_compute[185513]: 2026-01-06 16:11:05.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:11:09 compute-0 nova_compute[185513]: 2026-01-06 16:11:09.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:11:09 compute-0 nova_compute[185513]: 2026-01-06 16:11:09.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:11:09 compute-0 nova_compute[185513]: 2026-01-06 16:11:09.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:11:09 compute-0 nova_compute[185513]: 2026-01-06 16:11:09.049 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.068 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.069 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.069 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.070 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.589 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.592 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5677MB free_disk=72.47797393798828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.592 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.593 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.662 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.663 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.695 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.713 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.715 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:11:10 compute-0 nova_compute[185513]: 2026-01-06 16:11:10.716 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:11:15 compute-0 podman[250578]: 2026-01-06 16:11:15.820207172 +0000 UTC m=+0.087625364 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 06 16:11:17 compute-0 podman[250597]: 2026-01-06 16:11:17.861099044 +0000 UTC m=+0.122117727 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Jan 06 16:11:18 compute-0 nova_compute[185513]: 2026-01-06 16:11:18.718 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:11:19 compute-0 podman[250617]: 2026-01-06 16:11:19.8706899 +0000 UTC m=+0.123118154 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, distribution-scope=public, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41)
Jan 06 16:11:24 compute-0 nova_compute[185513]: 2026-01-06 16:11:24.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:11:25 compute-0 podman[250638]: 2026-01-06 16:11:25.875512375 +0000 UTC m=+0.128673322 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 06 16:11:25 compute-0 podman[250639]: 2026-01-06 16:11:25.906775561 +0000 UTC m=+0.150913217 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:11:28 compute-0 podman[250684]: 2026-01-06 16:11:28.867767981 +0000 UTC m=+0.127952453 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, version=9.4, config_id=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, vcs-type=git)
Jan 06 16:11:29 compute-0 podman[201918]: time="2026-01-06T16:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:11:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:11:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3901 "" "Go-http-client/1.1"
Jan 06 16:11:31 compute-0 openstack_network_exporter[205258]: ERROR   16:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:11:31 compute-0 openstack_network_exporter[205258]: ERROR   16:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.090 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.092 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.092 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.097 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.107 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.118 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.119 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.120 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.121 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.123 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.124 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.124 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.125 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.128 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.128 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.128 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.126 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.129 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.129 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb759ca0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.130 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.130 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.131 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.131 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.131 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.132 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.132 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.132 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.133 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:11:33.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:11:33 compute-0 podman[250706]: 2026-01-06 16:11:33.810809011 +0000 UTC m=+0.074236176 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 16:11:33 compute-0 podman[250705]: 2026-01-06 16:11:33.901123787 +0000 UTC m=+0.159482616 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 06 16:11:46 compute-0 podman[250755]: 2026-01-06 16:11:46.852632657 +0000 UTC m=+0.107526806 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 06 16:11:48 compute-0 podman[250772]: 2026-01-06 16:11:48.882781624 +0000 UTC m=+0.134896069 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 06 16:11:50 compute-0 podman[250793]: 2026-01-06 16:11:50.85018957 +0000 UTC m=+0.104745612 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64)
Jan 06 16:11:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:11:53.732 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:11:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:11:53.733 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:11:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:11:53.733 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:11:54 compute-0 nova_compute[185513]: 2026-01-06 16:11:54.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:11:56 compute-0 podman[250815]: 2026-01-06 16:11:56.849898749 +0000 UTC m=+0.102643723 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 16:11:56 compute-0 podman[250814]: 2026-01-06 16:11:56.883327443 +0000 UTC m=+0.143016049 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 06 16:11:59 compute-0 podman[201918]: time="2026-01-06T16:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:11:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:11:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3894 "" "Go-http-client/1.1"
Jan 06 16:11:59 compute-0 podman[250856]: 2026-01-06 16:11:59.868720727 +0000 UTC m=+0.131967980 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, architecture=x86_64, io.openshift.tags=base rhel9)
Jan 06 16:12:00 compute-0 nova_compute[185513]: 2026-01-06 16:12:00.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:12:01 compute-0 openstack_network_exporter[205258]: ERROR   16:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:12:01 compute-0 openstack_network_exporter[205258]: ERROR   16:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:12:02 compute-0 nova_compute[185513]: 2026-01-06 16:12:02.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:12:04 compute-0 nova_compute[185513]: 2026-01-06 16:12:04.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:12:04 compute-0 podman[250876]: 2026-01-06 16:12:04.854466612 +0000 UTC m=+0.110130548 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 16:12:04 compute-0 podman[250875]: 2026-01-06 16:12:04.951891658 +0000 UTC m=+0.204189256 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 06 16:12:06 compute-0 nova_compute[185513]: 2026-01-06 16:12:06.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:12:06 compute-0 nova_compute[185513]: 2026-01-06 16:12:06.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:12:06 compute-0 nova_compute[185513]: 2026-01-06 16:12:06.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:12:09 compute-0 nova_compute[185513]: 2026-01-06 16:12:09.046 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:12:09 compute-0 nova_compute[185513]: 2026-01-06 16:12:09.047 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:12:09 compute-0 nova_compute[185513]: 2026-01-06 16:12:09.048 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:12:09 compute-0 nova_compute[185513]: 2026-01-06 16:12:09.064 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.070 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.071 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.072 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.073 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.644 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.647 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5692MB free_disk=72.47795486450195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.647 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.648 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.740 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.741 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.777 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.807 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.809 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:12:11 compute-0 nova_compute[185513]: 2026-01-06 16:12:11.809 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:12:17 compute-0 podman[250922]: 2026-01-06 16:12:17.888335211 +0000 UTC m=+0.142272839 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 06 16:12:19 compute-0 podman[250942]: 2026-01-06 16:12:19.872338766 +0000 UTC m=+0.128476868 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.build-date=20251224, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 06 16:12:20 compute-0 nova_compute[185513]: 2026-01-06 16:12:20.809 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:12:21 compute-0 podman[250962]: 2026-01-06 16:12:21.872881023 +0000 UTC m=+0.124460083 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Jan 06 16:12:27 compute-0 podman[250983]: 2026-01-06 16:12:27.842735086 +0000 UTC m=+0.113970829 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 06 16:12:27 compute-0 podman[250984]: 2026-01-06 16:12:27.853299071 +0000 UTC m=+0.105597210 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:12:29 compute-0 podman[201918]: time="2026-01-06T16:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:12:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:12:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3902 "" "Go-http-client/1.1"
Jan 06 16:12:30 compute-0 podman[251026]: 2026-01-06 16:12:30.877835058 +0000 UTC m=+0.137442953 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4, com.redhat.component=ubi9-container, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-type=git, managed_by=edpm_ansible, config_id=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 06 16:12:31 compute-0 openstack_network_exporter[205258]: ERROR   16:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:12:31 compute-0 openstack_network_exporter[205258]: ERROR   16:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:12:35 compute-0 podman[251048]: 2026-01-06 16:12:35.867919949 +0000 UTC m=+0.118330513 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 16:12:35 compute-0 podman[251047]: 2026-01-06 16:12:35.938583326 +0000 UTC m=+0.202635507 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 06 16:12:48 compute-0 podman[251093]: 2026-01-06 16:12:48.830239558 +0000 UTC m=+0.099658106 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 06 16:12:50 compute-0 podman[251113]: 2026-01-06 16:12:50.843652631 +0000 UTC m=+0.106094934 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 06 16:12:52 compute-0 podman[251132]: 2026-01-06 16:12:52.808191968 +0000 UTC m=+0.076595973 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter)
Jan 06 16:12:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:12:53.734 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:12:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:12:53.734 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:12:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:12:53.735 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:12:56 compute-0 nova_compute[185513]: 2026-01-06 16:12:56.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:12:58 compute-0 podman[251154]: 2026-01-06 16:12:58.849613771 +0000 UTC m=+0.096933674 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 16:12:58 compute-0 podman[251153]: 2026-01-06 16:12:58.880410556 +0000 UTC m=+0.135142033 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 16:12:59 compute-0 podman[201918]: time="2026-01-06T16:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:12:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:12:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3888 "" "Go-http-client/1.1"
Jan 06 16:13:01 compute-0 nova_compute[185513]: 2026-01-06 16:13:01.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:13:01 compute-0 openstack_network_exporter[205258]: ERROR   16:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:13:01 compute-0 openstack_network_exporter[205258]: ERROR   16:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:13:01 compute-0 podman[251196]: 2026-01-06 16:13:01.905329691 +0000 UTC m=+0.160506545 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, architecture=x86_64, name=ubi9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.expose-services=)
Jan 06 16:13:03 compute-0 nova_compute[185513]: 2026-01-06 16:13:03.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:13:06 compute-0 nova_compute[185513]: 2026-01-06 16:13:06.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:13:06 compute-0 nova_compute[185513]: 2026-01-06 16:13:06.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:13:06 compute-0 nova_compute[185513]: 2026-01-06 16:13:06.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:13:06 compute-0 podman[251215]: 2026-01-06 16:13:06.837012075 +0000 UTC m=+0.090180588 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 16:13:06 compute-0 podman[251214]: 2026-01-06 16:13:06.959935007 +0000 UTC m=+0.207045851 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:13:07 compute-0 nova_compute[185513]: 2026-01-06 16:13:07.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:13:11 compute-0 nova_compute[185513]: 2026-01-06 16:13:11.026 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:13:11 compute-0 nova_compute[185513]: 2026-01-06 16:13:11.027 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:13:11 compute-0 nova_compute[185513]: 2026-01-06 16:13:11.028 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:13:11 compute-0 nova_compute[185513]: 2026-01-06 16:13:11.079 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.081 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.082 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.082 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.083 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.665 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.667 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5696MB free_disk=72.47795486450195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.668 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.668 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.771 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.772 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.807 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.839 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.842 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:13:12 compute-0 nova_compute[185513]: 2026-01-06 16:13:12.843 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:13:19 compute-0 podman[251262]: 2026-01-06 16:13:19.855334177 +0000 UTC m=+0.114002370 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 06 16:13:20 compute-0 nova_compute[185513]: 2026-01-06 16:13:20.843 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:13:21 compute-0 podman[251280]: 2026-01-06 16:13:21.79484 +0000 UTC m=+0.067196477 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Jan 06 16:13:23 compute-0 podman[251301]: 2026-01-06 16:13:23.871534557 +0000 UTC m=+0.121507036 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:13:24 compute-0 nova_compute[185513]: 2026-01-06 16:13:24.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:13:29 compute-0 podman[201918]: time="2026-01-06T16:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:13:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:13:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3898 "" "Go-http-client/1.1"
Jan 06 16:13:29 compute-0 podman[251322]: 2026-01-06 16:13:29.871304881 +0000 UTC m=+0.124978817 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 06 16:13:29 compute-0 podman[251323]: 2026-01-06 16:13:29.880894692 +0000 UTC m=+0.123908569 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 16:13:31 compute-0 openstack_network_exporter[205258]: ERROR   16:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:13:31 compute-0 openstack_network_exporter[205258]: ERROR   16:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:13:32 compute-0 sshd-session[251367]: Accepted publickey for zuul from 38.102.83.46 port 38348 ssh2: RSA SHA256:/tsYtTPHPswvCHUyDjuXJcnXXQRlaCz6QYAgaouSN5U
Jan 06 16:13:32 compute-0 systemd-logind[791]: New session 30 of user zuul.
Jan 06 16:13:32 compute-0 systemd[1]: Started Session 30 of User zuul.
Jan 06 16:13:32 compute-0 sshd-session[251367]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 16:13:32 compute-0 podman[251369]: 2026-01-06 16:13:32.919353973 +0000 UTC m=+0.168338590 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9)
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.091 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.093 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.094 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.109 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.111 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.114 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.115 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.116 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.117 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.122 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.123 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.123 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb7776e0>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'network.incoming.packets.drop': [], 'disk.device.read.requests': [], 'cpu': [], 'disk.device.usage': [], 'network.outgoing.packets.error': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'disk.ephemeral.size': [], 'network.incoming.bytes': [], 'disk.root.size': [], 'network.outgoing.packets': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.124 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.125 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.126 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.128 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.128 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.129 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.130 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.131 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.132 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.133 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:13:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:13:33.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
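The cycle above shows each compute pollster being registered against a shared ThreadPoolExecutor, running the local_instances discovery, and then being skipped because discovery returned no instances on this node. A minimal sketch of that register/discover/skip loop, with illustrative names rather than ceilometer's actual API:

    # Illustrative sketch of the poll cycle logged above; not ceilometer's real code.
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # Discovery returned an empty list on this node, so every meter was skipped.
        return []

    def run_pollster(name, resources):
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        print(f"Polling {name} for {len(resources)} resource(s)")

    pollsters = ["cpu", "memory.usage", "network.incoming.bytes"]
    with ThreadPoolExecutor() as executor:
        resources = discover_local_instances()
        futures = [executor.submit(run_pollster, p, resources) for p in pollsters]
        for f in futures:
            f.result()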
Jan 06 16:13:33 compute-0 sudo[251564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkhkgfnnnfadmkebzpbyxbwpyaghgspf ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767716012.9658508-63066-206501171908750/AnsiballZ_command.py'
Jan 06 16:13:33 compute-0 sudo[251564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 16:13:33 compute-0 python3[251566]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 16:13:34 compute-0 sudo[251564]: pam_unix(sudo:session): session closed for user root
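The Zuul job verifies that the ceilometer_agent_compute container is running by shelling out to podman through Ansible's command module. The same check can be run directly (container name taken from the log; the printed line is only an example):

    # Re-run the container status check performed by the Ansible task above.
    import subprocess

    result = subprocess.run(
        'podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute',
        shell=True, capture_output=True, text=True,
    )
    print(result.stdout.strip())  # e.g. "ceilometer_agent_compute Up 2 hours (healthy)"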
Jan 06 16:13:37 compute-0 podman[251607]: 2026-01-06 16:13:37.821450332 +0000 UTC m=+0.085873705 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 16:13:37 compute-0 podman[251606]: 2026-01-06 16:13:37.913119658 +0000 UTC m=+0.182421138 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 06 16:13:50 compute-0 podman[251658]: 2026-01-06 16:13:50.844316811 +0000 UTC m=+0.102583262 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 06 16:13:52 compute-0 podman[251677]: 2026-01-06 16:13:52.878705843 +0000 UTC m=+0.132824712 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
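The health_status events above come from podman's built-in container healthchecks; each container mounts its healthcheck script under /openstack and podman records the result and failing streak. The same checks can be triggered by hand (container names taken from the log):

    # Manually trigger the same podman healthchecks; exit code 0 means healthy.
    import subprocess

    for name in ("node_exporter", "ovn_controller", "ovn_metadata_agent",
                 "ceilometer_agent_compute"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")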
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.328 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "bba4da1b-3395-4dc5-8781-0f7080001e18" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.330 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "bba4da1b-3395-4dc5-8781-0f7080001e18" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.352 185517 DEBUG nova.compute.manager [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.480 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.481 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.496 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.497 185517 INFO nova.compute.claims [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Claim successful on node compute-0.ctlplane.example.com
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.616 185517 DEBUG nova.compute.provider_tree [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.640 185517 DEBUG nova.scheduler.client.report [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
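The inventory reported to placement fixes how much of this node can be scheduled: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out from the figures in the line above:

    # Capacity placement will schedule against, computed from the reported inventory.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 71.1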
Jan 06 16:13:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:13:53.736 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:13:53.737 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:13:53.737 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.801 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.802 185517 DEBUG nova.compute.manager [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.953 185517 DEBUG nova.compute.manager [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 06 16:13:53 compute-0 nova_compute[185513]: 2026-01-06 16:13:53.988 185517 INFO nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 06 16:13:54 compute-0 nova_compute[185513]: 2026-01-06 16:13:54.018 185517 DEBUG nova.compute.manager [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 06 16:13:54 compute-0 nova_compute[185513]: 2026-01-06 16:13:54.096 185517 DEBUG nova.compute.manager [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 06 16:13:54 compute-0 nova_compute[185513]: 2026-01-06 16:13:54.098 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 06 16:13:54 compute-0 nova_compute[185513]: 2026-01-06 16:13:54.099 185517 INFO nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Creating image(s)
Jan 06 16:13:54 compute-0 nova_compute[185513]: 2026-01-06 16:13:54.100 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:54 compute-0 nova_compute[185513]: 2026-01-06 16:13:54.101 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:54 compute-0 nova_compute[185513]: 2026-01-06 16:13:54.102 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:13:54 compute-0 nova_compute[185513]: 2026-01-06 16:13:54.103 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "989dc6cda99877ec35028408d4366cbe851234bb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:54 compute-0 nova_compute[185513]: 2026-01-06 16:13:54.103 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "989dc6cda99877ec35028408d4366cbe851234bb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:54 compute-0 podman[251697]: 2026-01-06 16:13:54.904575452 +0000 UTC m=+0.161250164 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, version=9.6, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 06 16:13:56 compute-0 nova_compute[185513]: 2026-01-06 16:13:56.080 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:56 compute-0 nova_compute[185513]: 2026-01-06 16:13:56.184 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb.part --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:56 compute-0 nova_compute[185513]: 2026-01-06 16:13:56.188 185517 DEBUG nova.virt.images [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] bb48f187-7bf4-4584-887b-93e4e3789185 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 06 16:13:56 compute-0 nova_compute[185513]: 2026-01-06 16:13:56.190 185517 DEBUG nova.privsep.utils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 06 16:13:56 compute-0 nova_compute[185513]: 2026-01-06 16:13:56.190 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb.part /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:56 compute-0 nova_compute[185513]: 2026-01-06 16:13:56.490 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb.part /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb.converted" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:56 compute-0 nova_compute[185513]: 2026-01-06 16:13:56.499 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:56 compute-0 nova_compute[185513]: 2026-01-06 16:13:56.600 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb.converted --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:56 compute-0 nova_compute[185513]: 2026-01-06 16:13:56.603 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "989dc6cda99877ec35028408d4366cbe851234bb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
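The image-cache population above follows a fixed pattern: probe the downloaded .part file with qemu-img info, and if it reports qcow2, convert it to raw before it lands in _base as the backing image. A condensed sketch of that sequence (paths copied from the log; error handling and the final rename omitted):

    # Sketch of the probe-and-convert step seen above; not nova's actual implementation.
    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb"

    def img_info(path):
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)

    if img_info(base + ".part")["format"] == "qcow2":
        # Convert to raw so the cached base image can back qcow2 overlays directly.
        subprocess.run(
            ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
             base + ".part", base + ".converted"],
            check=True,
        )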
Jan 06 16:13:56 compute-0 nova_compute[185513]: 2026-01-06 16:13:56.637 185517 INFO oslo.privsep.daemon [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpg4b3iwj_/privsep.sock']
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.401 185517 INFO oslo.privsep.daemon [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Spawned new privsep daemon via rootwrap
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.261 251734 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.266 251734 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.268 251734 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.268 251734 INFO oslo.privsep.daemon [-] privsep daemon running as pid 251734
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.490 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.580 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.582 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "989dc6cda99877ec35028408d4366cbe851234bb" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.584 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "989dc6cda99877ec35028408d4366cbe851234bb" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.613 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.719 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.721 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb,backing_fmt=raw /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.789 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb,backing_fmt=raw /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk 1073741824" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.792 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "989dc6cda99877ec35028408d4366cbe851234bb" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
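The instance's root disk is then created as a thin qcow2 overlay on top of that cached raw base image, with a 1 GiB virtual size. The equivalent standalone command (paths and size copied from the log line above):

    # Create the qcow2 overlay for the instance root disk, backed by the cached raw image.
    import subprocess

    base = "/var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb"
    disk = "/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk"
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         disk, "1073741824"],
        check=True,
    )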
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.793 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.891 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/989dc6cda99877ec35028408d4366cbe851234bb --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.893 185517 DEBUG nova.virt.disk.api [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Checking if we can resize image /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.893 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.969 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.971 185517 DEBUG nova.virt.disk.api [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Cannot resize image /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.971 185517 DEBUG nova.objects.instance [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lazy-loading 'migration_context' on Instance uuid bba4da1b-3395-4dc5-8781-0f7080001e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.992 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.993 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.994 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.994 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.995 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:57 compute-0 nova_compute[185513]: 2026-01-06 16:13:57.995 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.027 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.029 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.078 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.080 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.107 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.205 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.207 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.208 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.233 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.333 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.335 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.411 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 1073741824" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.413 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.415 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.503 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.505 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.506 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Ensure instance console log exists: /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.508 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.508 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.509 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.515 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-06T16:13:38Z,direct_url=<?>,disk_format='qcow2',id=bb48f187-7bf4-4584-887b-93e4e3789185,min_disk=0,min_ram=0,name='fvt_testing_image',owner='22de66acf9384254aaaaa9230e48fbad',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-06T16:13:42Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'encrypted': False, 'size': 0, 'guest_format': None, 'encryption_options': None, 'encryption_secret_uuid': None, 'image_id': 'bb48f187-7bf4-4584-887b-93e4e3789185'}], 'ephemerals': [{'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vdb', 'encrypted': False, 'size': 1, 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.529 185517 WARNING nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.539 185517 DEBUG nova.virt.libvirt.host [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.541 185517 DEBUG nova.virt.libvirt.host [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.547 185517 DEBUG nova.virt.libvirt.host [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.548 185517 DEBUG nova.virt.libvirt.host [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.549 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.550 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-06T16:13:45Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='f0fea971-6100-4f14-92c5-cf8e781e6434',id=1,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-06T16:13:38Z,direct_url=<?>,disk_format='qcow2',id=bb48f187-7bf4-4584-887b-93e4e3789185,min_disk=0,min_ram=0,name='fvt_testing_image',owner='22de66acf9384254aaaaa9230e48fbad',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-06T16:13:42Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.552 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.552 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.553 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.554 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.554 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.555 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.556 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.557 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.558 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.559 185517 DEBUG nova.virt.hardware [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.566 185517 DEBUG nova.privsep.utils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.569 185517 DEBUG nova.objects.instance [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lazy-loading 'pci_devices' on Instance uuid bba4da1b-3395-4dc5-8781-0f7080001e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.593 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] End _get_guest_xml xml=<domain type="kvm">
Jan 06 16:13:58 compute-0 nova_compute[185513]:   <uuid>bba4da1b-3395-4dc5-8781-0f7080001e18</uuid>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   <name>instance-00000001</name>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   <memory>524288</memory>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   <vcpu>1</vcpu>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   <metadata>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <nova:name>fvt_testing_server</nova:name>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <nova:creationTime>2026-01-06 16:13:58</nova:creationTime>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <nova:flavor name="fvt_testing_flavor">
Jan 06 16:13:58 compute-0 nova_compute[185513]:         <nova:memory>512</nova:memory>
Jan 06 16:13:58 compute-0 nova_compute[185513]:         <nova:disk>1</nova:disk>
Jan 06 16:13:58 compute-0 nova_compute[185513]:         <nova:swap>0</nova:swap>
Jan 06 16:13:58 compute-0 nova_compute[185513]:         <nova:ephemeral>1</nova:ephemeral>
Jan 06 16:13:58 compute-0 nova_compute[185513]:         <nova:vcpus>1</nova:vcpus>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       </nova:flavor>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <nova:owner>
Jan 06 16:13:58 compute-0 nova_compute[185513]:         <nova:user uuid="57adfba8d76549e18b6be1928f7f4e68">admin</nova:user>
Jan 06 16:13:58 compute-0 nova_compute[185513]:         <nova:project uuid="22de66acf9384254aaaaa9230e48fbad">admin</nova:project>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       </nova:owner>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <nova:root type="image" uuid="bb48f187-7bf4-4584-887b-93e4e3789185"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <nova:ports/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     </nova:instance>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   </metadata>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   <sysinfo type="smbios">
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <system>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <entry name="manufacturer">RDO</entry>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <entry name="product">OpenStack Compute</entry>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <entry name="serial">bba4da1b-3395-4dc5-8781-0f7080001e18</entry>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <entry name="uuid">bba4da1b-3395-4dc5-8781-0f7080001e18</entry>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <entry name="family">Virtual Machine</entry>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     </system>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   </sysinfo>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   <os>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <boot dev="hd"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <smbios mode="sysinfo"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   </os>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   <features>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <acpi/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <apic/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <vmcoreinfo/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   </features>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   <clock offset="utc">
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <timer name="pit" tickpolicy="delay"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <timer name="hpet" present="no"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   </clock>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   <cpu mode="host-model" match="exact">
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <topology sockets="1" cores="1" threads="1"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   </cpu>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   <devices>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <disk type="file" device="disk">
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <source file="/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <target dev="vda" bus="virtio"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     </disk>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <disk type="file" device="disk">
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <source file="/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <target dev="vdb" bus="virtio"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     </disk>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <disk type="file" device="cdrom">
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <driver name="qemu" type="raw" cache="none"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <source file="/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.config"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <target dev="sda" bus="sata"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     </disk>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <serial type="pty">
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <log file="/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/console.log" append="off"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     </serial>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <video>
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <model type="virtio"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     </video>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <input type="tablet" bus="usb"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <rng model="virtio">
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <backend model="random">/dev/urandom</backend>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     </rng>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="pci" model="pcie-root-port"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <controller type="usb" index="0"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     <memballoon model="virtio">
Jan 06 16:13:58 compute-0 nova_compute[185513]:       <stats period="10"/>
Jan 06 16:13:58 compute-0 nova_compute[185513]:     </memballoon>
Jan 06 16:13:58 compute-0 nova_compute[185513]:   </devices>
Jan 06 16:13:58 compute-0 nova_compute[185513]: </domain>
Jan 06 16:13:58 compute-0 nova_compute[185513]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.688 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.689 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.689 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 06 16:13:58 compute-0 nova_compute[185513]: 2026-01-06 16:13:58.690 185517 INFO nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Using config drive
Jan 06 16:13:59 compute-0 podman[201918]: time="2026-01-06T16:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:13:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:13:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3906 "" "Go-http-client/1.1"
Jan 06 16:14:00 compute-0 nova_compute[185513]: 2026-01-06 16:14:00.023 185517 INFO nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Creating config drive at /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.config
Jan 06 16:14:00 compute-0 nova_compute[185513]: 2026-01-06 16:14:00.032 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaalhnkg_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:14:00 compute-0 nova_compute[185513]: 2026-01-06 16:14:00.181 185517 DEBUG oslo_concurrency.processutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaalhnkg_" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:14:00 compute-0 systemd-machined[156892]: New machine qemu-1-instance-00000001.
Jan 06 16:14:00 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 06 16:14:00 compute-0 podman[251778]: 2026-01-06 16:14:00.422511786 +0000 UTC m=+0.117717057 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:14:00 compute-0 podman[251777]: 2026-01-06 16:14:00.433434202 +0000 UTC m=+0.138057969 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 06 16:14:00 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 06 16:14:00 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 06 16:14:00 compute-0 nova_compute[185513]: 2026-01-06 16:14:00.937 185517 DEBUG nova.virt.driver [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Emitting event <LifecycleEvent: 1767716040.935839, bba4da1b-3395-4dc5-8781-0f7080001e18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 06 16:14:00 compute-0 nova_compute[185513]: 2026-01-06 16:14:00.938 185517 INFO nova.compute.manager [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] VM Resumed (Lifecycle Event)
Jan 06 16:14:00 compute-0 nova_compute[185513]: 2026-01-06 16:14:00.953 185517 DEBUG nova.compute.manager [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 06 16:14:00 compute-0 nova_compute[185513]: 2026-01-06 16:14:00.953 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 06 16:14:00 compute-0 nova_compute[185513]: 2026-01-06 16:14:00.961 185517 INFO nova.virt.libvirt.driver [-] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Instance spawned successfully.
Jan 06 16:14:00 compute-0 nova_compute[185513]: 2026-01-06 16:14:00.962 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 06 16:14:00 compute-0 nova_compute[185513]: 2026-01-06 16:14:00.991 185517 DEBUG nova.compute.manager [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.009 185517 DEBUG nova.compute.manager [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.022 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.023 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.024 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.025 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.026 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.027 185517 DEBUG nova.virt.libvirt.driver [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.038 185517 INFO nova.compute.manager [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.038 185517 DEBUG nova.virt.driver [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] Emitting event <LifecycleEvent: 1767716040.951383, bba4da1b-3395-4dc5-8781-0f7080001e18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.039 185517 INFO nova.compute.manager [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] VM Started (Lifecycle Event)
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.065 185517 DEBUG nova.compute.manager [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.074 185517 DEBUG nova.compute.manager [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.097 185517 INFO nova.compute.manager [None req-31129cb7-6523-46d6-9c2c-f258cc23ac28 - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.103 185517 INFO nova.compute.manager [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Took 7.01 seconds to spawn the instance on the hypervisor.
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.104 185517 DEBUG nova.compute.manager [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.174 185517 INFO nova.compute.manager [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Took 7.75 seconds to build instance.
Jan 06 16:14:01 compute-0 nova_compute[185513]: 2026-01-06 16:14:01.193 185517 DEBUG oslo_concurrency.lockutils [None req-ae591e28-6f42-4e8a-94b2-e9cb2a57bafb 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "bba4da1b-3395-4dc5-8781-0f7080001e18" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:14:01 compute-0 openstack_network_exporter[205258]: ERROR   16:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:14:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:14:01 compute-0 openstack_network_exporter[205258]: ERROR   16:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:14:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:14:03 compute-0 nova_compute[185513]: 2026-01-06 16:14:03.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:03 compute-0 podman[251854]: 2026-01-06 16:14:03.897853233 +0000 UTC m=+0.152571558 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, release-0.7.12=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-type=git, architecture=x86_64, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 06 16:14:05 compute-0 nova_compute[185513]: 2026-01-06 16:14:05.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:06 compute-0 nova_compute[185513]: 2026-01-06 16:14:06.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:06 compute-0 nova_compute[185513]: 2026-01-06 16:14:06.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:14:07 compute-0 nova_compute[185513]: 2026-01-06 16:14:07.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:08 compute-0 nova_compute[185513]: 2026-01-06 16:14:08.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:08 compute-0 podman[251875]: 2026-01-06 16:14:08.871091093 +0000 UTC m=+0.112273875 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 16:14:08 compute-0 podman[251874]: 2026-01-06 16:14:08.933386281 +0000 UTC m=+0.180800126 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 16:14:12 compute-0 nova_compute[185513]: 2026-01-06 16:14:12.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:12 compute-0 nova_compute[185513]: 2026-01-06 16:14:12.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:14:12 compute-0 nova_compute[185513]: 2026-01-06 16:14:12.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:14:12 compute-0 nova_compute[185513]: 2026-01-06 16:14:12.945 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 06 16:14:12 compute-0 nova_compute[185513]: 2026-01-06 16:14:12.947 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquired lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 06 16:14:12 compute-0 nova_compute[185513]: 2026-01-06 16:14:12.948 185517 DEBUG nova.network.neutron [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 06 16:14:12 compute-0 nova_compute[185513]: 2026-01-06 16:14:12.949 185517 DEBUG nova.objects.instance [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lazy-loading 'info_cache' on Instance uuid bba4da1b-3395-4dc5-8781-0f7080001e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 06 16:14:13 compute-0 nova_compute[185513]: 2026-01-06 16:14:13.148 185517 DEBUG nova.network.neutron [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.087 185517 DEBUG nova.network.neutron [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.108 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Releasing lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.109 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.110 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.138 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.139 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.140 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.140 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.236 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.344 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.346 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.410 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.413 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.520 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.522 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:14:14 compute-0 nova_compute[185513]: 2026-01-06 16:14:14.619 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:14:15 compute-0 nova_compute[185513]: 2026-01-06 16:14:15.969 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:14:15 compute-0 nova_compute[185513]: 2026-01-06 16:14:15.973 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5539MB free_disk=72.44601821899414GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:14:15 compute-0 nova_compute[185513]: 2026-01-06 16:14:15.975 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:14:15 compute-0 nova_compute[185513]: 2026-01-06 16:14:15.976 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.137 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Instance bba4da1b-3395-4dc5-8781-0f7080001e18 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.139 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.140 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.223 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing inventories for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.299 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating ProviderTree inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.301 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.323 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing aggregate associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.357 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing trait associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.415 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.483 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updated inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.485 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.485 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.520 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.522 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.523 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:16 compute-0 nova_compute[185513]: 2026-01-06 16:14:16.523 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 06 16:14:17 compute-0 nova_compute[185513]: 2026-01-06 16:14:17.038 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:17 compute-0 nova_compute[185513]: 2026-01-06 16:14:17.040 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 06 16:14:17 compute-0 nova_compute[185513]: 2026-01-06 16:14:17.057 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 06 16:14:21 compute-0 nova_compute[185513]: 2026-01-06 16:14:21.044 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:21 compute-0 podman[251935]: 2026-01-06 16:14:21.854815312 +0000 UTC m=+0.107394958 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 06 16:14:23 compute-0 podman[251953]: 2026-01-06 16:14:23.840667485 +0000 UTC m=+0.105079577 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 16:14:25 compute-0 podman[251974]: 2026-01-06 16:14:25.90612166 +0000 UTC m=+0.161850100 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:14:27 compute-0 nova_compute[185513]: 2026-01-06 16:14:27.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:29 compute-0 podman[201918]: time="2026-01-06T16:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:14:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:14:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3896 "" "Go-http-client/1.1"
Jan 06 16:14:30 compute-0 podman[252000]: 2026-01-06 16:14:30.886098436 +0000 UTC m=+0.131542189 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 06 16:14:30 compute-0 podman[252001]: 2026-01-06 16:14:30.920615948 +0000 UTC m=+0.155744491 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 16:14:31 compute-0 openstack_network_exporter[205258]: ERROR   16:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:14:31 compute-0 openstack_network_exporter[205258]: ERROR   16:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:14:33 compute-0 sshd-session[251380]: Received disconnect from 38.102.83.46 port 38348:11: disconnected by user
Jan 06 16:14:33 compute-0 sshd-session[251380]: Disconnected from user zuul 38.102.83.46 port 38348
Jan 06 16:14:33 compute-0 sshd-session[251367]: pam_unix(sshd:session): session closed for user zuul
Jan 06 16:14:33 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 06 16:14:33 compute-0 systemd[1]: session-30.scope: Consumed 1.357s CPU time.
Jan 06 16:14:33 compute-0 systemd-logind[791]: Session 30 logged out. Waiting for processes to exit.
Jan 06 16:14:33 compute-0 systemd-logind[791]: Removed session 30.
Jan 06 16:14:34 compute-0 podman[252041]: 2026-01-06 16:14:34.917944409 +0000 UTC m=+0.164569071 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=kepler, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Jan 06 16:14:39 compute-0 podman[252062]: 2026-01-06 16:14:39.887735705 +0000 UTC m=+0.127141144 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 16:14:39 compute-0 podman[252061]: 2026-01-06 16:14:39.975786176 +0000 UTC m=+0.220673028 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 06 16:14:52 compute-0 podman[252110]: 2026-01-06 16:14:52.838238187 +0000 UTC m=+0.094026678 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 06 16:14:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:14:53.737 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:14:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:14:53.739 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:14:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:14:53.739 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:14:54 compute-0 podman[252128]: 2026-01-06 16:14:54.843469669 +0000 UTC m=+0.110498558 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute)
Jan 06 16:14:56 compute-0 podman[252148]: 2026-01-06 16:14:56.861342479 +0000 UTC m=+0.126032625 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 06 16:14:58 compute-0 nova_compute[185513]: 2026-01-06 16:14:58.038 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:14:59 compute-0 podman[201918]: time="2026-01-06T16:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:14:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:14:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3902 "" "Go-http-client/1.1"
Jan 06 16:15:01 compute-0 openstack_network_exporter[205258]: ERROR   16:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:15:01 compute-0 openstack_network_exporter[205258]: ERROR   16:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:15:01 compute-0 podman[252170]: 2026-01-06 16:15:01.820905274 +0000 UTC m=+0.083185935 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:15:01 compute-0 podman[252169]: 2026-01-06 16:15:01.862703556 +0000 UTC m=+0.127044781 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 06 16:15:03 compute-0 nova_compute[185513]: 2026-01-06 16:15:03.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:15:05 compute-0 podman[252212]: 2026-01-06 16:15:05.883602064 +0000 UTC m=+0.140186314 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.buildah.version=1.29.0, container_name=kepler, distribution-scope=public, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Jan 06 16:15:07 compute-0 nova_compute[185513]: 2026-01-06 16:15:07.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:15:08 compute-0 nova_compute[185513]: 2026-01-06 16:15:08.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:15:08 compute-0 nova_compute[185513]: 2026-01-06 16:15:08.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:15:08 compute-0 nova_compute[185513]: 2026-01-06 16:15:08.026 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:15:10 compute-0 nova_compute[185513]: 2026-01-06 16:15:10.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:15:10 compute-0 podman[252233]: 2026-01-06 16:15:10.849053104 +0000 UTC m=+0.096309478 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 16:15:10 compute-0 podman[252232]: 2026-01-06 16:15:10.909023181 +0000 UTC m=+0.158333929 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 06 16:15:12 compute-0 nova_compute[185513]: 2026-01-06 16:15:12.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:15:12 compute-0 nova_compute[185513]: 2026-01-06 16:15:12.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:15:12 compute-0 nova_compute[185513]: 2026-01-06 16:15:12.026 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:15:12 compute-0 nova_compute[185513]: 2026-01-06 16:15:12.988 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 06 16:15:12 compute-0 nova_compute[185513]: 2026-01-06 16:15:12.989 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquired lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 06 16:15:12 compute-0 nova_compute[185513]: 2026-01-06 16:15:12.989 185517 DEBUG nova.network.neutron [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 06 16:15:12 compute-0 nova_compute[185513]: 2026-01-06 16:15:12.990 185517 DEBUG nova.objects.instance [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lazy-loading 'info_cache' on Instance uuid bba4da1b-3395-4dc5-8781-0f7080001e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 06 16:15:13 compute-0 nova_compute[185513]: 2026-01-06 16:15:13.192 185517 DEBUG nova.network.neutron [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 06 16:15:13 compute-0 nova_compute[185513]: 2026-01-06 16:15:13.480 185517 DEBUG nova.network.neutron [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 06 16:15:13 compute-0 nova_compute[185513]: 2026-01-06 16:15:13.511 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Releasing lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 06 16:15:13 compute-0 nova_compute[185513]: 2026-01-06 16:15:13.512 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
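[editor note] The Acquiring/Acquired/Releasing lines above show the per-instance lock that _heal_instance_info_cache takes around the network info cache refresh. A minimal sketch of that locking pattern with oslo.concurrency (refresh_fn is a placeholder callable, not Nova's actual code path):

    from oslo_concurrency import lockutils

    def heal_info_cache(instance_uuid, refresh_fn):
        # Nova names the lock "refresh_cache-<uuid>" so concurrent refreshes of
        # the same instance's info cache are serialized, as in the log above.
        with lockutils.lock(f"refresh_cache-{instance_uuid}"):
            return refresh_fn(instance_uuid)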
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.060 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.061 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.062 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.063 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.203 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.277 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.281 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.386 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.388 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.493 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.495 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:15:15 compute-0 nova_compute[185513]: 2026-01-06 16:15:15.598 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
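[editor note] The resource audit above runs qemu-img info for each disk under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB and CPU time at 30 s. A hedged equivalent using the same library (the disk path is copied from the log purely as an example):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 * 1024 * 1024,  # matches --as=1073741824 above
        cpu_time=30,                       # matches --cpu=30 above
    )
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk',
        '--force-share', '--output=json',
        prlimit=limits,
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
    )
    print(out)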
Jan 06 16:15:16 compute-0 nova_compute[185513]: 2026-01-06 16:15:16.167 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:15:16 compute-0 nova_compute[185513]: 2026-01-06 16:15:16.168 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5481MB free_disk=72.42486190795898GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:15:16 compute-0 nova_compute[185513]: 2026-01-06 16:15:16.168 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:15:16 compute-0 nova_compute[185513]: 2026-01-06 16:15:16.168 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:15:16 compute-0 nova_compute[185513]: 2026-01-06 16:15:16.619 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Instance bba4da1b-3395-4dc5-8781-0f7080001e18 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 06 16:15:16 compute-0 nova_compute[185513]: 2026-01-06 16:15:16.620 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:15:16 compute-0 nova_compute[185513]: 2026-01-06 16:15:16.620 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:15:16 compute-0 nova_compute[185513]: 2026-01-06 16:15:16.677 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:15:16 compute-0 nova_compute[185513]: 2026-01-06 16:15:16.795 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
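[editor note] The inventory reported to placement above combines totals, reserved amounts and allocation ratios; placement treats schedulable capacity as (total - reserved) * allocation_ratio per resource class. A quick check with the values from this log:

    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, 'schedulable capacity =', capacity)
    # VCPU: (8 - 0) * 4.0 = 32; MEMORY_MB: (7679 - 512) * 1.0 = 7167;
    # DISK_GB: (79 - 1) * 0.9 = 70.2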
Jan 06 16:15:16 compute-0 nova_compute[185513]: 2026-01-06 16:15:16.798 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:15:16 compute-0 nova_compute[185513]: 2026-01-06 16:15:16.798 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:15:23 compute-0 podman[252290]: 2026-01-06 16:15:23.905845709 +0000 UTC m=+0.154144289 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 06 16:15:24 compute-0 nova_compute[185513]: 2026-01-06 16:15:24.801 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:15:24 compute-0 nova_compute[185513]: 2026-01-06 16:15:24.926 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:15:25 compute-0 podman[252308]: 2026-01-06 16:15:25.882861313 +0000 UTC m=+0.139755543 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 06 16:15:27 compute-0 podman[252326]: 2026-01-06 16:15:27.843653274 +0000 UTC m=+0.106388131 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, distribution-scope=public, io.openshift.expose-services=, config_id=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 06 16:15:29 compute-0 podman[201918]: time="2026-01-06T16:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:15:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:15:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3903 "" "Go-http-client/1.1"
Jan 06 16:15:31 compute-0 openstack_network_exporter[205258]: ERROR   16:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:15:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:15:31 compute-0 openstack_network_exporter[205258]: ERROR   16:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:15:31 compute-0 openstack_network_exporter[205258]: 
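[editor note] The two appctl errors above come from the openstack_network_exporter calling PMD statistics commands that only apply to the userspace (dpif-netdev) datapath; on a node using only the kernel datapath this reply is expected rather than a fault in OVS itself. The same calls can be reproduced directly (command names are taken from the error lines; the interpretation of the error is an assumption based on the message):

    import subprocess

    for cmd in ('dpif-netdev/pmd-perf-show', 'dpif-netdev/pmd-rxq-show'):
        result = subprocess.run(['ovs-appctl', cmd],
                                capture_output=True, text=True)
        # On a host without a userspace datapath this should return the same
        # "please specify an existing datapath" message seen above.
        print(cmd, '->', result.returncode,
              (result.stdout or result.stderr).strip())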
Jan 06 16:15:32 compute-0 podman[252347]: 2026-01-06 16:15:32.87901752 +0000 UTC m=+0.142203797 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 06 16:15:32 compute-0 podman[252348]: 2026-01-06 16:15:32.910446001 +0000 UTC m=+0.154710304 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.092 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.093 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
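[editor note] The manager notes above that the [pollsters] source has more pollsters than worker threads (one, per the second line), so pollsters queue on the executor and run one after another. A tiny illustration of that queuing behaviour with concurrent.futures (not ceilometer code; the task names are made up):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's work
        return name + ' polled'

    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, n) for n in ('disk', 'net', 'cpu')]
        for fut in futures:
            print(fut.result())  # completes sequentially, one worker at a time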
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.093 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.100 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.101 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9ca6106b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.107 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance bba4da1b-3395-4dc5-8781-0f7080001e18 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 06 16:15:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:33.538 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/bba4da1b-3395-4dc5-8781-0f7080001e18 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}d461c27dce12d80eab7ba9a7d978253b94cf0b19fd5c3a616f6f35cad1a7b89e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.016 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Tue, 06 Jan 2026 16:15:33 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b834a2a8-e9fe-4b1e-b333-e99207be3e1e x-openstack-request-id: req-b834a2a8-e9fe-4b1e-b333-e99207be3e1e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.017 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "bba4da1b-3395-4dc5-8781-0f7080001e18", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "22de66acf9384254aaaaa9230e48fbad", "user_id": "57adfba8d76549e18b6be1928f7f4e68", "metadata": {}, "hostId": "812699c402b31e891d1b1eda96920a0214c0cc073d4e2b30944ebfc1", "image": {"id": "bb48f187-7bf4-4584-887b-93e4e3789185", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/bb48f187-7bf4-4584-887b-93e4e3789185"}]}, "flavor": {"id": "f0fea971-6100-4f14-92c5-cf8e781e6434", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/f0fea971-6100-4f14-92c5-cf8e781e6434"}]}, "created": "2026-01-06T16:13:50Z", "updated": "2026-01-06T16:14:01Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/bba4da1b-3395-4dc5-8781-0f7080001e18"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/bba4da1b-3395-4dc5-8781-0f7080001e18"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-06T16:14:01.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.018 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/bba4da1b-3395-4dc5-8781-0f7080001e18 used request id req-b834a2a8-e9fe-4b1e-b333-e99207be3e1e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.023 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bba4da1b-3395-4dc5-8781-0f7080001e18', 'name': 'fvt_testing_server', 'flavor': {'id': 'f0fea971-6100-4f14-92c5-cf8e781e6434', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'bb48f187-7bf4-4584-887b-93e4e3789185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '22de66acf9384254aaaaa9230e48fbad', 'user_id': '57adfba8d76549e18b6be1928f7f4e68', 'hostId': '812699c402b31e891d1b1eda96920a0214c0cc073d4e2b30944ebfc1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
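[editor note] The REQ/RESP pair above is ceilometer's compute discovery fetching the instance record through python-novaclient on a keystoneauth session. A hedged sketch of the same lookup; the auth URL, credentials and project below are placeholders, only the server UUID comes from this log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # placeholder
        username='ceilometer', password='secret',                    # placeholders
        project_name='service',
        user_domain_name='Default', project_domain_name='Default',
    )
    nova = nova_client.Client('2.1', session=session.Session(auth=auth))
    server = nova.servers.get('bba4da1b-3395-4dc5-8781-0f7080001e18')
    print(server.name, server.status)  # fvt_testing_server ACTIVE, per the RESP body above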
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.023 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.023 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.026 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-06T16:15:34.024996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.032 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.032 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.033 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.033 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-06T16:15:34.033884) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.079 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.082 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.082 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
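[editor note] The three disk.device.capacity samples above line up with the flavor reported earlier (disk 1 GB, ephemeral 1 GB) plus one much smaller device; mapping that small device to the config drive (config_drive "True" in the server record) is an assumption, not something the log states. A quick unit conversion:

    samples = {
        'root disk (assumed)':      1073741824,
        'ephemeral disk (assumed)': 1073741824,
        'config drive (assumed)':   485376,
    }
    for dev, size in samples.items():
        print(f'{dev}: {size} B = {size / 2**30:.3f} GiB')
    # 1073741824 B is exactly 1 GiB, matching the 1 GB root and ephemeral disks.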
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.084 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.085 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-06T16:15:34.085644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.196 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.197 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.198 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.199 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.201 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.read.latency volume: 581415974 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.201 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.read.latency volume: 94144410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-06T16:15:34.200705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.202 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.read.latency volume: 67952973 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.203 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.204 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.204 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-06T16:15:34.204631) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.205 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.206 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.206 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.206 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.207 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-06T16:15:34.206772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.207 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.208 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.210 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.210 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.210 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.211 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.211 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-06T16:15:34.211339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.256 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/cpu volume: 30370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.257 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.258 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.258 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.258 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.258 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.259 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.259 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-06T16:15:34.258764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.260 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.261 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.261 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.262 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-06T16:15:34.261927) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.262 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.262 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.263 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.263 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.263 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.write.bytes volume: 41787392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-06T16:15:34.263356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.264 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.264 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.265 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.266 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.266 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.266 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.write.latency volume: 1697212164 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.267 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.write.latency volume: 14771886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.267 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.268 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-06T16:15:34.266370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.268 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.269 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.269 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.269 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.270 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.270 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.271 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.271 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-06T16:15:34.269545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.273 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.273 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-06T16:15:34.273093) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.274 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.275 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-06T16:15:34.274393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.275 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.275 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.275 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.276 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-06T16:15:34.275907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.276 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.276 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.276 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.277 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.277 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.277 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-06T16:15:34.277106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.277 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.278 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.278 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.278 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.278 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.278 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.278 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.279 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.279 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-06T16:15:34.278711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.279 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.280 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.280 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.280 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.280 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.281 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.281 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.281 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.281 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.281 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.281 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-06T16:15:34.280019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.282 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.282 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-06T16:15:34.281182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.282 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.282 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.283 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.283 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.283 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.283 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.283 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.283 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.284 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.284 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-06T16:15:34.282381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.285 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.285 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-06T16:15:34.283950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.285 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.286 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.286 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.286 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.286 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.286 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.287 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-06T16:15:34.286774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.288 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.288 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.288 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-06T16:15:34.288075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.288 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.290 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.290 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.291 14 DEBUG ceilometer.compute.pollsters [-] bba4da1b-3395-4dc5-8781-0f7080001e18/memory.usage volume: 47.53125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.291 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.292 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-06T16:15:34.290836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.292 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.292 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.292 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.293 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-06T16:15:34.292663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.293 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.294 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.294 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.294 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-06T16:15:34.293983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.294 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.295 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.296 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.297 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.298 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:34 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:15:34.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:15:36 compute-0 podman[252399]: 2026-01-06 16:15:36.862504705 +0000 UTC m=+0.121296381 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, architecture=x86_64, release-0.7.12=, vendor=Red Hat, Inc., config_id=kepler, io.openshift.tags=base rhel9)
Jan 06 16:15:41 compute-0 podman[252419]: 2026-01-06 16:15:41.857775224 +0000 UTC m=+0.103017543 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 16:15:41 compute-0 podman[252418]: 2026-01-06 16:15:41.913663334 +0000 UTC m=+0.175762354 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 06 16:15:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:15:53.739 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:15:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:15:53.741 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:15:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:15:53.742 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:15:54 compute-0 podman[252467]: 2026-01-06 16:15:54.857090274 +0000 UTC m=+0.106627227 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 06 16:15:56 compute-0 podman[252485]: 2026-01-06 16:15:56.893026319 +0000 UTC m=+0.143471100 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 06 16:15:58 compute-0 nova_compute[185513]: 2026-01-06 16:15:58.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:15:58 compute-0 podman[252505]: 2026-01-06 16:15:58.842017693 +0000 UTC m=+0.096077262 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Jan 06 16:15:59 compute-0 podman[201918]: time="2026-01-06T16:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:15:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:15:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3901 "" "Go-http-client/1.1"
Jan 06 16:16:01 compute-0 openstack_network_exporter[205258]: ERROR   16:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:16:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:16:01 compute-0 openstack_network_exporter[205258]: ERROR   16:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:16:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:16:03 compute-0 nova_compute[185513]: 2026-01-06 16:16:03.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:16:03 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 06 16:16:03 compute-0 podman[252526]: 2026-01-06 16:16:03.825913336 +0000 UTC m=+0.104080951 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 06 16:16:03 compute-0 podman[252527]: 2026-01-06 16:16:03.851786522 +0000 UTC m=+0.121757353 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 16:16:07 compute-0 podman[252568]: 2026-01-06 16:16:07.860820647 +0000 UTC m=+0.119283018 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, name=ubi9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.buildah.version=1.29.0)
Jan 06 16:16:08 compute-0 nova_compute[185513]: 2026-01-06 16:16:08.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:16:08 compute-0 nova_compute[185513]: 2026-01-06 16:16:08.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:16:09 compute-0 nova_compute[185513]: 2026-01-06 16:16:09.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:16:09 compute-0 nova_compute[185513]: 2026-01-06 16:16:09.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:16:12 compute-0 nova_compute[185513]: 2026-01-06 16:16:12.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:16:12 compute-0 podman[252589]: 2026-01-06 16:16:12.875522063 +0000 UTC m=+0.126931858 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 06 16:16:12 compute-0 podman[252588]: 2026-01-06 16:16:12.923549248 +0000 UTC m=+0.175765814 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 06 16:16:14 compute-0 nova_compute[185513]: 2026-01-06 16:16:14.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:16:14 compute-0 nova_compute[185513]: 2026-01-06 16:16:14.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:16:14 compute-0 nova_compute[185513]: 2026-01-06 16:16:14.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:16:15 compute-0 nova_compute[185513]: 2026-01-06 16:16:15.095 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 06 16:16:15 compute-0 nova_compute[185513]: 2026-01-06 16:16:15.096 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquired lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 06 16:16:15 compute-0 nova_compute[185513]: 2026-01-06 16:16:15.097 185517 DEBUG nova.network.neutron [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 06 16:16:15 compute-0 nova_compute[185513]: 2026-01-06 16:16:15.098 185517 DEBUG nova.objects.instance [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lazy-loading 'info_cache' on Instance uuid bba4da1b-3395-4dc5-8781-0f7080001e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 06 16:16:15 compute-0 nova_compute[185513]: 2026-01-06 16:16:15.230 185517 DEBUG nova.network.neutron [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 06 16:16:16 compute-0 nova_compute[185513]: 2026-01-06 16:16:16.191 185517 DEBUG nova.network.neutron [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 06 16:16:16 compute-0 nova_compute[185513]: 2026-01-06 16:16:16.207 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Releasing lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 06 16:16:16 compute-0 nova_compute[185513]: 2026-01-06 16:16:16.208 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.051 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.051 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.052 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.052 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.155 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.256 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.257 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.347 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.349 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.412 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.413 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.493 185517 DEBUG oslo_concurrency.processutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.837 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.840 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5483MB free_disk=72.42486190795898GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.841 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.841 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.933 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Instance bba4da1b-3395-4dc5-8781-0f7080001e18 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.934 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.934 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.986 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:16:17 compute-0 nova_compute[185513]: 2026-01-06 16:16:17.998 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:16:18 compute-0 nova_compute[185513]: 2026-01-06 16:16:18.000 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:16:18 compute-0 nova_compute[185513]: 2026-01-06 16:16:18.000 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:16:25 compute-0 nova_compute[185513]: 2026-01-06 16:16:25.001 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:16:25 compute-0 podman[252650]: 2026-01-06 16:16:25.865626237 +0000 UTC m=+0.117888691 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 06 16:16:27 compute-0 nova_compute[185513]: 2026-01-06 16:16:27.600 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "bba4da1b-3395-4dc5-8781-0f7080001e18" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:16:27 compute-0 nova_compute[185513]: 2026-01-06 16:16:27.601 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "bba4da1b-3395-4dc5-8781-0f7080001e18" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:16:27 compute-0 nova_compute[185513]: 2026-01-06 16:16:27.603 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "bba4da1b-3395-4dc5-8781-0f7080001e18-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:16:27 compute-0 nova_compute[185513]: 2026-01-06 16:16:27.604 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "bba4da1b-3395-4dc5-8781-0f7080001e18-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:16:27 compute-0 nova_compute[185513]: 2026-01-06 16:16:27.604 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "bba4da1b-3395-4dc5-8781-0f7080001e18-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:16:27 compute-0 nova_compute[185513]: 2026-01-06 16:16:27.608 185517 INFO nova.compute.manager [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Terminating instance
Jan 06 16:16:27 compute-0 nova_compute[185513]: 2026-01-06 16:16:27.610 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 06 16:16:27 compute-0 nova_compute[185513]: 2026-01-06 16:16:27.610 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquired lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 06 16:16:27 compute-0 nova_compute[185513]: 2026-01-06 16:16:27.611 185517 DEBUG nova.network.neutron [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 06 16:16:27 compute-0 podman[252669]: 2026-01-06 16:16:27.835803073 +0000 UTC m=+0.090436104 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 06 16:16:28 compute-0 nova_compute[185513]: 2026-01-06 16:16:28.053 185517 DEBUG nova.network.neutron [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 06 16:16:28 compute-0 nova_compute[185513]: 2026-01-06 16:16:28.359 185517 DEBUG nova.network.neutron [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 06 16:16:28 compute-0 nova_compute[185513]: 2026-01-06 16:16:28.728 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Releasing lock "refresh_cache-bba4da1b-3395-4dc5-8781-0f7080001e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 06 16:16:28 compute-0 nova_compute[185513]: 2026-01-06 16:16:28.730 185517 DEBUG nova.compute.manager [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 06 16:16:28 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 06 16:16:28 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 51.376s CPU time.
Jan 06 16:16:28 compute-0 systemd-machined[156892]: Machine qemu-1-instance-00000001 terminated.
Jan 06 16:16:29 compute-0 podman[252688]: 2026-01-06 16:16:29.033208595 +0000 UTC m=+0.132221046 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.044 185517 INFO nova.virt.libvirt.driver [-] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Instance destroyed successfully.
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.046 185517 DEBUG nova.objects.instance [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lazy-loading 'resources' on Instance uuid bba4da1b-3395-4dc5-8781-0f7080001e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.079 185517 INFO nova.virt.libvirt.driver [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Deleting instance files /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18_del
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.080 185517 INFO nova.virt.libvirt.driver [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Deletion of /var/lib/nova/instances/bba4da1b-3395-4dc5-8781-0f7080001e18_del complete
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.387 185517 DEBUG nova.virt.libvirt.host [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.388 185517 INFO nova.virt.libvirt.host [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] UEFI support detected
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.395 185517 INFO nova.compute.manager [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Took 0.66 seconds to destroy the instance on the hypervisor.
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.396 185517 DEBUG oslo.service.loopingcall [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.396 185517 DEBUG nova.compute.manager [-] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.397 185517 DEBUG nova.network.neutron [-] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 06 16:16:29 compute-0 podman[201918]: time="2026-01-06T16:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:16:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:16:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3902 "" "Go-http-client/1.1"
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.969 185517 DEBUG nova.network.neutron [-] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 06 16:16:29 compute-0 nova_compute[185513]: 2026-01-06 16:16:29.986 185517 DEBUG nova.network.neutron [-] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 06 16:16:30 compute-0 nova_compute[185513]: 2026-01-06 16:16:30.019 185517 INFO nova.compute.manager [-] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Took 0.62 seconds to deallocate network for instance.
Jan 06 16:16:30 compute-0 nova_compute[185513]: 2026-01-06 16:16:30.180 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:16:30 compute-0 nova_compute[185513]: 2026-01-06 16:16:30.182 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:16:30 compute-0 nova_compute[185513]: 2026-01-06 16:16:30.261 185517 DEBUG nova.compute.provider_tree [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:16:30 compute-0 nova_compute[185513]: 2026-01-06 16:16:30.384 185517 DEBUG nova.scheduler.client.report [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:16:30 compute-0 nova_compute[185513]: 2026-01-06 16:16:30.409 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:16:30 compute-0 nova_compute[185513]: 2026-01-06 16:16:30.438 185517 INFO nova.scheduler.client.report [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Deleted allocations for instance bba4da1b-3395-4dc5-8781-0f7080001e18
Jan 06 16:16:30 compute-0 nova_compute[185513]: 2026-01-06 16:16:30.555 185517 DEBUG oslo_concurrency.lockutils [None req-f2c3de04-ed28-4ab5-97f9-53aeb2b18269 57adfba8d76549e18b6be1928f7f4e68 22de66acf9384254aaaaa9230e48fbad - - default default] Lock "bba4da1b-3395-4dc5-8781-0f7080001e18" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:16:31 compute-0 openstack_network_exporter[205258]: ERROR   16:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:16:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:16:31 compute-0 openstack_network_exporter[205258]: ERROR   16:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:16:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:16:34 compute-0 podman[252719]: 2026-01-06 16:16:34.848714195 +0000 UTC m=+0.107268194 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi)
Jan 06 16:16:34 compute-0 podman[252720]: 2026-01-06 16:16:34.861931011 +0000 UTC m=+0.113085516 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 16:16:38 compute-0 podman[252761]: 2026-01-06 16:16:38.886105869 +0000 UTC m=+0.138015657 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, config_id=kepler, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Jan 06 16:16:43 compute-0 podman[252782]: 2026-01-06 16:16:43.844826269 +0000 UTC m=+0.100279192 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 16:16:43 compute-0 podman[252781]: 2026-01-06 16:16:43.927749995 +0000 UTC m=+0.182984462 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:16:44 compute-0 nova_compute[185513]: 2026-01-06 16:16:44.041 185517 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767716189.039068, bba4da1b-3395-4dc5-8781-0f7080001e18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 06 16:16:44 compute-0 nova_compute[185513]: 2026-01-06 16:16:44.041 185517 INFO nova.compute.manager [-] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] VM Stopped (Lifecycle Event)
Jan 06 16:16:44 compute-0 nova_compute[185513]: 2026-01-06 16:16:44.226 185517 DEBUG nova.compute.manager [None req-bfad9d74-e2bd-4aac-ba5b-8267d7e1affc - - - - - -] [instance: bba4da1b-3395-4dc5-8781-0f7080001e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 06 16:16:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:16:53.740 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:16:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:16:53.742 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:16:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:16:53.742 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:16:56 compute-0 podman[252830]: 2026-01-06 16:16:56.878710606 +0000 UTC m=+0.116455265 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 06 16:16:58 compute-0 nova_compute[185513]: 2026-01-06 16:16:58.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:16:58 compute-0 podman[252851]: 2026-01-06 16:16:58.8680645 +0000 UTC m=+0.134805513 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 06 16:16:59 compute-0 podman[201918]: time="2026-01-06T16:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:16:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:16:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3903 "" "Go-http-client/1.1"
Jan 06 16:16:59 compute-0 podman[252871]: 2026-01-06 16:16:59.887347417 +0000 UTC m=+0.140242806 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Jan 06 16:17:01 compute-0 openstack_network_exporter[205258]: ERROR   16:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:17:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:17:01 compute-0 openstack_network_exporter[205258]: ERROR   16:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:17:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:17:05 compute-0 nova_compute[185513]: 2026-01-06 16:17:05.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:17:05 compute-0 podman[252892]: 2026-01-06 16:17:05.880682073 +0000 UTC m=+0.130204444 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 06 16:17:05 compute-0 podman[252893]: 2026-01-06 16:17:05.897368819 +0000 UTC m=+0.133683005 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 16:17:06 compute-0 sshd-session[252931]: Accepted publickey for zuul from 38.102.83.46 port 37056 ssh2: RSA SHA256:/tsYtTPHPswvCHUyDjuXJcnXXQRlaCz6QYAgaouSN5U
Jan 06 16:17:06 compute-0 systemd-logind[791]: New session 31 of user zuul.
Jan 06 16:17:06 compute-0 systemd[1]: Started Session 31 of User zuul.
Jan 06 16:17:06 compute-0 sshd-session[252931]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 16:17:07 compute-0 sudo[253108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojhwbvlopzqjjwjvxlgwvidymxsyrjog ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767716226.3747854-63893-116451121479143/AnsiballZ_command.py'
Jan 06 16:17:07 compute-0 sudo[253108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 16:17:07 compute-0 python3[253110]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 16:17:07 compute-0 sudo[253108]: pam_unix(sudo:session): session closed for user root
Jan 06 16:17:09 compute-0 podman[253150]: 2026-01-06 16:17:09.892464448 +0000 UTC m=+0.126890677 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., architecture=x86_64, vendor=Red Hat, Inc., config_id=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 06 16:17:10 compute-0 nova_compute[185513]: 2026-01-06 16:17:10.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:17:10 compute-0 nova_compute[185513]: 2026-01-06 16:17:10.028 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:17:11 compute-0 nova_compute[185513]: 2026-01-06 16:17:11.022 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:17:11 compute-0 nova_compute[185513]: 2026-01-06 16:17:11.023 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:17:14 compute-0 nova_compute[185513]: 2026-01-06 16:17:14.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:17:14 compute-0 nova_compute[185513]: 2026-01-06 16:17:14.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:17:14 compute-0 nova_compute[185513]: 2026-01-06 16:17:14.026 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:17:14 compute-0 podman[253172]: 2026-01-06 16:17:14.160485939 +0000 UTC m=+0.083648657 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 16:17:14 compute-0 nova_compute[185513]: 2026-01-06 16:17:14.251 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:17:14 compute-0 nova_compute[185513]: 2026-01-06 16:17:14.252 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:17:14 compute-0 podman[253171]: 2026-01-06 16:17:14.253302124 +0000 UTC m=+0.160192097 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 06 16:17:15 compute-0 sudo[253396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvcmbhezkvhxguyhywijtqjfeftlkyxp ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767716234.3496394-64056-233609602008637/AnsiballZ_command.py'
Jan 06 16:17:15 compute-0 sudo[253396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 16:17:15 compute-0 python3[253398]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 16:17:15 compute-0 sudo[253396]: pam_unix(sudo:session): session closed for user root
Jan 06 16:17:19 compute-0 nova_compute[185513]: 2026-01-06 16:17:19.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:17:19 compute-0 nova_compute[185513]: 2026-01-06 16:17:19.890 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:17:19 compute-0 nova_compute[185513]: 2026-01-06 16:17:19.891 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:17:19 compute-0 nova_compute[185513]: 2026-01-06 16:17:19.891 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:17:19 compute-0 nova_compute[185513]: 2026-01-06 16:17:19.892 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:17:20 compute-0 nova_compute[185513]: 2026-01-06 16:17:20.377 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:17:20 compute-0 nova_compute[185513]: 2026-01-06 16:17:20.379 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5612MB free_disk=72.44752883911133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:17:20 compute-0 nova_compute[185513]: 2026-01-06 16:17:20.379 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:17:20 compute-0 nova_compute[185513]: 2026-01-06 16:17:20.379 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:17:20 compute-0 nova_compute[185513]: 2026-01-06 16:17:20.493 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:17:20 compute-0 nova_compute[185513]: 2026-01-06 16:17:20.494 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:17:20 compute-0 nova_compute[185513]: 2026-01-06 16:17:20.524 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:17:20 compute-0 nova_compute[185513]: 2026-01-06 16:17:20.541 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
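The inventory dictionary above is what the resource tracker hands to the Placement service; usable capacity per resource class is, in Placement's model, roughly (total - reserved) * allocation_ratio. A small illustrative calculation using the values from the log line (the helper function is hypothetical):

    # Inventory as reported for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    def effective_capacity(inv):
        # Placement-style usable capacity: (total - reserved) * allocation_ratio.
        return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
                for rc, v in inv.items()}

    print(effective_capacity(inventory))
    # MEMORY_MB: 7167.0, VCPU: 32.0, DISK_GB: ~70.2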
Jan 06 16:17:20 compute-0 nova_compute[185513]: 2026-01-06 16:17:20.567 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:17:20 compute-0 nova_compute[185513]: 2026-01-06 16:17:20.568 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
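The paired "Acquiring lock" / "Lock ... acquired" / "Lock ... released" lines around "compute_resources" are emitted by oslo.concurrency's synchronized decorator. A minimal sketch of that pattern, using a hypothetical function name rather than nova's real method:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def _update_available_resource_example():
        # The body runs only while the "compute_resources" semaphore is held;
        # the decorator's inner wrapper logs the acquire/release debug lines
        # seen above when debug logging is enabled.
        pass

    _update_available_resource_example()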
Jan 06 16:17:25 compute-0 sudo[253610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wozzikuwnkwilmxwcxmavhyudqryfspt ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767716244.3397768-64208-14525589500794/AnsiballZ_command.py'
Jan 06 16:17:25 compute-0 sudo[253610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 16:17:25 compute-0 python3[253612]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
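The Ansible task above shells out to podman to confirm the kepler container is present and healthy. An equivalent stand-alone check, as a hypothetical script that assumes podman is on PATH and the caller may query the rootful container store:

    import subprocess

    # Same query the Ansible task runs: list all containers as "<name> <status>".
    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        check=True, capture_output=True, text=True,
    ).stdout

    kepler_lines = [line for line in out.splitlines() if "kepler" in line]
    print(kepler_lines or "kepler container not found")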
Jan 06 16:17:25 compute-0 sudo[253610]: pam_unix(sudo:session): session closed for user root
Jan 06 16:17:26 compute-0 nova_compute[185513]: 2026-01-06 16:17:26.566 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:17:27 compute-0 nova_compute[185513]: 2026-01-06 16:17:27.018 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:17:27 compute-0 podman[253651]: 2026-01-06 16:17:27.898920108 +0000 UTC m=+0.144397414 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 06 16:17:29 compute-0 podman[201918]: time="2026-01-06T16:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:17:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:17:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
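The podman[201918] lines are the podman system service answering libpod REST calls on its unix socket (the metrics exporters poll it this way). A stdlib-only sketch of issuing the same containers/json query, assuming the rootful socket at /run/podman/podman.sock is readable and that the response fields are named as in current libpod versions:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that speaks HTTP over a unix socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([(c["Names"], c["State"]) for c in containers])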
Jan 06 16:17:29 compute-0 podman[253669]: 2026-01-06 16:17:29.903371618 +0000 UTC m=+0.167108308 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224)
Jan 06 16:17:30 compute-0 podman[253689]: 2026-01-06 16:17:30.065617797 +0000 UTC m=+0.115933080 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:17:31 compute-0 openstack_network_exporter[205258]: ERROR   16:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:17:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:17:31 compute-0 openstack_network_exporter[205258]: ERROR   16:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:17:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.095 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.098 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.102 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.103 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.104 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.105 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.106 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.107 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.108 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.108 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.109 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.110 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{'network.outgoing.packets.drop': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.110 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.111 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.112 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.113 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.115 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.116 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.117 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.118 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.119 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.120 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.121 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:17:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:17:33.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
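The ceilometer block above repeats one pattern per pollster: run the local_instances discovery, get an empty result on this idle compute node, and skip the meter. A toy illustration of that control flow (hypothetical names, not ceilometer's actual classes):

    def discover_local_instances():
        # On this idle compute node libvirt reports no domains, so discovery
        # returns an empty list and every pollster is skipped this cycle.
        return []

    pollsters = ["cpu", "memory.usage", "disk.device.read.bytes",
                 "network.incoming.packets.drop"]

    discovery_cache = {"local_instances": discover_local_instances()}
    for name in pollsters:
        resources = discovery_cache["local_instances"]
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            continue
        # ...otherwise poll libvirt stats for each discovered instance...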
Jan 06 16:17:36 compute-0 podman[253711]: 2026-01-06 16:17:36.836449541 +0000 UTC m=+0.091660367 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 16:17:36 compute-0 podman[253710]: 2026-01-06 16:17:36.879597778 +0000 UTC m=+0.133520750 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 06 16:17:39 compute-0 sudo[253925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llqbibvefqyltrchlnougicvswuavros ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767716258.7479584-64427-201575873631013/AnsiballZ_command.py'
Jan 06 16:17:39 compute-0 sudo[253925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 16:17:39 compute-0 python3[253927]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 06 16:17:39 compute-0 sudo[253925]: pam_unix(sudo:session): session closed for user root
Jan 06 16:17:40 compute-0 podman[253966]: 2026-01-06 16:17:40.857859977 +0000 UTC m=+0.117563753 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=kepler, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 06 16:17:44 compute-0 podman[253986]: 2026-01-06 16:17:44.871717367 +0000 UTC m=+0.140980915 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 16:17:44 compute-0 podman[253985]: 2026-01-06 16:17:44.901642589 +0000 UTC m=+0.189779290 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 06 16:17:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:17:53.743 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:17:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:17:53.744 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:17:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:17:53.745 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
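[Editor's note] The "Acquiring lock ... / Lock ... acquired ... waited / Lock ... released ... held" DEBUG triplets above (and the "compute_resources" ones later in this capture) are emitted by oslo.concurrency's lockutils wrapper. A minimal sketch of that pattern, assuming oslo.concurrency is installed; the decorated function below is illustrative and not lifted from the neutron or nova sources:

    from oslo_concurrency import lockutils

    # Named in-process lock via decorator; the wrapper logs the
    # acquire / waited / held timings seen in the DEBUG lines above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # body elided; only the locking pattern is of interest here

    # Equivalent context-manager form.
    with lockutils.lock("_check_child_processes"):
        pass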
Jan 06 16:17:58 compute-0 podman[254038]: 2026-01-06 16:17:58.869791331 +0000 UTC m=+0.133798107 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:17:59 compute-0 podman[201918]: time="2026-01-06T16:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:17:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:17:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3913 "" "Go-http-client/1.1"
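[Editor's note] The GET /v4.9.3/libpod/containers/json and /libpod/containers/stats requests above are the podman system service (PID 201918) answering a client, most likely podman_exporter, over the socket named in its CONTAINER_HOST environment. A rough equivalent query, assuming the third-party podman-py client is available (it is not shown anywhere in this log):

    from podman import PodmanClient

    # Same socket as CONTAINER_HOST in the podman_exporter config_data above.
    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        # Roughly the data behind /libpod/containers/json?all=true
        for c in client.containers.list(all=True):
            print(c.name, c.status)

This also mirrors the earlier ad-hoc check run via Ansible (podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter).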
Jan 06 16:18:00 compute-0 nova_compute[185513]: 2026-01-06 16:18:00.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:18:00 compute-0 podman[254057]: 2026-01-06 16:18:00.889799348 +0000 UTC m=+0.138445149 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:18:00 compute-0 podman[254056]: 2026-01-06 16:18:00.897009337 +0000 UTC m=+0.151160081 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2)
Jan 06 16:18:01 compute-0 openstack_network_exporter[205258]: ERROR   16:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:18:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:18:01 compute-0 openstack_network_exporter[205258]: ERROR   16:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:18:01 compute-0 openstack_network_exporter[205258]: 
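[Editor's note] The two ERROR lines from openstack_network_exporter recur every collection cycle: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show are ovs-appctl commands that only apply to a userspace (netdev/PMD) datapath, so "please specify an existing datapath" is the expected failure on a host whose Open vSwitch uses the kernel datapath, as this one appears to. A hedged sketch of a guard that skips those calls when no netdev datapath exists (the ovs-appctl targets are real commands; the wrapper function is hypothetical):

    import subprocess

    def has_netdev_datapath() -> bool:
        # "dpctl/dump-dps" lists datapaths such as "system@ovs-system" or
        # "netdev@ovs-netdev"; only the latter supports the PMD show commands.
        out = subprocess.run(["ovs-appctl", "dpctl/dump-dps"],
                             capture_output=True, text=True, check=True).stdout
        return any(line.startswith("netdev@") for line in out.splitlines())

    if has_netdev_datapath():
        subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"], check=False)
        subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"], check=False)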
Jan 06 16:18:05 compute-0 nova_compute[185513]: 2026-01-06 16:18:05.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:18:07 compute-0 podman[254094]: 2026-01-06 16:18:07.836556269 +0000 UTC m=+0.095089066 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:18:07 compute-0 podman[254093]: 2026-01-06 16:18:07.843649635 +0000 UTC m=+0.105197900 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 06 16:18:10 compute-0 nova_compute[185513]: 2026-01-06 16:18:10.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:18:10 compute-0 nova_compute[185513]: 2026-01-06 16:18:10.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:18:11 compute-0 nova_compute[185513]: 2026-01-06 16:18:11.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:18:11 compute-0 nova_compute[185513]: 2026-01-06 16:18:11.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:18:11 compute-0 podman[254132]: 2026-01-06 16:18:11.848071577 +0000 UTC m=+0.109769150 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, distribution-scope=public, name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, config_id=kepler, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:18:15 compute-0 nova_compute[185513]: 2026-01-06 16:18:15.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:18:15 compute-0 nova_compute[185513]: 2026-01-06 16:18:15.027 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:18:15 compute-0 nova_compute[185513]: 2026-01-06 16:18:15.028 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:18:15 compute-0 nova_compute[185513]: 2026-01-06 16:18:15.053 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:18:15 compute-0 nova_compute[185513]: 2026-01-06 16:18:15.055 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:18:16 compute-0 podman[254153]: 2026-01-06 16:18:16.055876894 +0000 UTC m=+0.115905480 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 06 16:18:16 compute-0 podman[254152]: 2026-01-06 16:18:16.117823172 +0000 UTC m=+0.180638000 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.072 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.073 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.074 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.075 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.586 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.587 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5646MB free_disk=72.44655227661133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.588 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.588 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.661 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.661 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.688 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.718 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.719 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:18:20 compute-0 nova_compute[185513]: 2026-01-06 16:18:20.720 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
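[Editor's note] The inventory reported at 16:18:20 pairs each resource total with a reserved amount and an allocation ratio. Placement treats schedulable capacity as (total - reserved) * allocation_ratio, so a quick check of the figures in that line (the formula is the standard placement one; the snippet itself is only an illustration):

    # Inventory exactly as logged by nova.scheduler.client.report above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 70.2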
Jan 06 16:18:27 compute-0 nova_compute[185513]: 2026-01-06 16:18:27.723 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:18:29 compute-0 podman[201918]: time="2026-01-06T16:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:18:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:18:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3908 "" "Go-http-client/1.1"
Jan 06 16:18:29 compute-0 podman[254202]: 2026-01-06 16:18:29.884693827 +0000 UTC m=+0.142344681 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Jan 06 16:18:31 compute-0 openstack_network_exporter[205258]: ERROR   16:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:18:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:18:31 compute-0 openstack_network_exporter[205258]: ERROR   16:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:18:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:18:31 compute-0 podman[254220]: 2026-01-06 16:18:31.860339644 +0000 UTC m=+0.130714727 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2)
Jan 06 16:18:31 compute-0 podman[254221]: 2026-01-06 16:18:31.871726931 +0000 UTC m=+0.124454143 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:18:38 compute-0 podman[254260]: 2026-01-06 16:18:38.814612622 +0000 UTC m=+0.070793821 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 06 16:18:38 compute-0 podman[254259]: 2026-01-06 16:18:38.823094743 +0000 UTC m=+0.086811639 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 16:18:39 compute-0 sshd-session[252934]: Received disconnect from 38.102.83.46 port 37056:11: disconnected by user
Jan 06 16:18:39 compute-0 sshd-session[252934]: Disconnected from user zuul 38.102.83.46 port 37056
Jan 06 16:18:39 compute-0 sshd-session[252931]: pam_unix(sshd:session): session closed for user zuul
Jan 06 16:18:39 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Jan 06 16:18:39 compute-0 systemd[1]: session-31.scope: Consumed 5.390s CPU time.
Jan 06 16:18:39 compute-0 systemd-logind[791]: Session 31 logged out. Waiting for processes to exit.
Jan 06 16:18:39 compute-0 systemd-logind[791]: Removed session 31.
Jan 06 16:18:42 compute-0 podman[254300]: 2026-01-06 16:18:42.873028105 +0000 UTC m=+0.124618367 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, version=9.4, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vendor=Red Hat, Inc.)
Jan 06 16:18:46 compute-0 podman[254321]: 2026-01-06 16:18:46.879008768 +0000 UTC m=+0.127634887 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 06 16:18:46 compute-0 podman[254320]: 2026-01-06 16:18:46.949086009 +0000 UTC m=+0.206683922 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 06 16:18:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:18:53.745 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:18:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:18:53.746 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:18:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:18:53.746 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:18:59 compute-0 podman[201918]: time="2026-01-06T16:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:18:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:18:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3912 "" "Go-http-client/1.1"
Jan 06 16:19:00 compute-0 nova_compute[185513]: 2026-01-06 16:19:00.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:00 compute-0 podman[254368]: 2026-01-06 16:19:00.850351294 +0000 UTC m=+0.105487268 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 06 16:19:01 compute-0 openstack_network_exporter[205258]: ERROR   16:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:19:01 compute-0 openstack_network_exporter[205258]: ERROR   16:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:19:02 compute-0 podman[254386]: 2026-01-06 16:19:02.829578334 +0000 UTC m=+0.092343124 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 06 16:19:02 compute-0 podman[254387]: 2026-01-06 16:19:02.838287892 +0000 UTC m=+0.100079486 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 06 16:19:07 compute-0 nova_compute[185513]: 2026-01-06 16:19:07.020 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:09 compute-0 podman[254427]: 2026-01-06 16:19:09.86870608 +0000 UTC m=+0.128022336 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 06 16:19:09 compute-0 podman[254428]: 2026-01-06 16:19:09.876003551 +0000 UTC m=+0.119037821 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 06 16:19:10 compute-0 nova_compute[185513]: 2026-01-06 16:19:10.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:12 compute-0 nova_compute[185513]: 2026-01-06 16:19:12.025 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:13 compute-0 nova_compute[185513]: 2026-01-06 16:19:13.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:13 compute-0 nova_compute[185513]: 2026-01-06 16:19:13.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:19:13 compute-0 podman[254471]: 2026-01-06 16:19:13.850215084 +0000 UTC m=+0.108186558 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_id=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, container_name=kepler, vcs-type=git)
Jan 06 16:19:16 compute-0 nova_compute[185513]: 2026-01-06 16:19:16.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:16 compute-0 nova_compute[185513]: 2026-01-06 16:19:16.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:19:16 compute-0 nova_compute[185513]: 2026-01-06 16:19:16.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:19:16 compute-0 nova_compute[185513]: 2026-01-06 16:19:16.059 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:19:17 compute-0 nova_compute[185513]: 2026-01-06 16:19:17.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:17 compute-0 podman[254489]: 2026-01-06 16:19:17.836319007 +0000 UTC m=+0.083923074 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 06 16:19:17 compute-0 podman[254488]: 2026-01-06 16:19:17.863847627 +0000 UTC m=+0.124991268 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.095 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.096 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.096 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.096 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.555 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.558 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5656MB free_disk=72.44655227661133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.558 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.559 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.823 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.824 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:19:21 compute-0 nova_compute[185513]: 2026-01-06 16:19:21.923 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing inventories for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 06 16:19:22 compute-0 nova_compute[185513]: 2026-01-06 16:19:22.011 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating ProviderTree inventory for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 06 16:19:22 compute-0 nova_compute[185513]: 2026-01-06 16:19:22.012 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Updating inventory in ProviderTree for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 06 16:19:22 compute-0 nova_compute[185513]: 2026-01-06 16:19:22.030 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing aggregate associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 06 16:19:22 compute-0 nova_compute[185513]: 2026-01-06 16:19:22.056 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Refreshing trait associations for resource provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608, traits: COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE41,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 06 16:19:22 compute-0 nova_compute[185513]: 2026-01-06 16:19:22.091 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:19:22 compute-0 nova_compute[185513]: 2026-01-06 16:19:22.122 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:19:22 compute-0 nova_compute[185513]: 2026-01-06 16:19:22.125 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:19:22 compute-0 nova_compute[185513]: 2026-01-06 16:19:22.126 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:19:22 compute-0 nova_compute[185513]: 2026-01-06 16:19:22.127 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:22 compute-0 nova_compute[185513]: 2026-01-06 16:19:22.128 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 06 16:19:22 compute-0 nova_compute[185513]: 2026-01-06 16:19:22.144 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 06 16:19:26 compute-0 nova_compute[185513]: 2026-01-06 16:19:26.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:26 compute-0 nova_compute[185513]: 2026-01-06 16:19:26.024 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 06 16:19:29 compute-0 nova_compute[185513]: 2026-01-06 16:19:29.037 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:29 compute-0 podman[201918]: time="2026-01-06T16:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:19:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:19:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3905 "" "Go-http-client/1.1"
Jan 06 16:19:30 compute-0 nova_compute[185513]: 2026-01-06 16:19:30.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:31 compute-0 openstack_network_exporter[205258]: ERROR   16:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:19:31 compute-0 openstack_network_exporter[205258]: ERROR   16:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:19:31 compute-0 podman[254536]: 2026-01-06 16:19:31.86348381 +0000 UTC m=+0.116007432 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.094 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.095 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe9d0d39cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3b9e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3ba40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5280>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3baa0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bb60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bbc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d2bd5c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bc80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39c70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39d60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d38590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0e465d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39df0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d3bef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe9d0d39f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe9cb75a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe9d246b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe9d0d3b890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe9d0d3b9b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe9d0d39c40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe9d0d3ba10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe9d339c200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe9d0d3ba70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe9d0d39ac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe9d0d3bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe9d0d3bb30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe9d0d3bb90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe9d0d39d30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe9d0d3bbf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe9d3e188c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe9d0d3bc50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe9d0e69e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe9d0d39ee0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe9d0d39f40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe9d0d38560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.104 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe9d0e46510>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe9d0d39dc0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe9d0d39e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.105 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe9d0d3be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe9d0d3bf20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe9d0d3bf50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe9d0ea3dd0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.106 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 06 16:19:33 compute-0 ceilometer_agent_compute[195413]: 2026-01-06 16:19:33.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
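The ceilometer DEBUG lines above trace one polling cycle: pollsters are registered with a thread-pool executor, each pollster runs its local_instances discovery, is skipped when discovery returns no resources (no VMs run on this compute node yet), and is then reported as finished. A minimal sketch of that pattern follows; it is illustrative only and uses hypothetical names, not ceilometer's actual AgentManager code.

# Minimal sketch (assumed pattern, not ceilometer's real implementation) of the
# register -> discover -> skip/poll -> finished cycle seen in the log above.
from concurrent.futures import ThreadPoolExecutor

def discover_local_instances():
    # Hypothetical stand-in for the "local_instances" discovery method;
    # returns an empty list when no instances exist on the host.
    return []

def run_pollster(name, discover):
    resources = discover()
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
    else:
        print(f"Polling {name} for {len(resources)} resources")
    print(f"Finished processing pollster [{name}].")

pollsters = ["disk.device.capacity", "network.incoming.bytes", "memory.usage"]
with ThreadPoolExecutor(max_workers=4) as executor:
    for name in pollsters:
        executor.submit(run_pollster, name, discover_local_instances)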
Jan 06 16:19:33 compute-0 podman[254557]: 2026-01-06 16:19:33.883592811 +0000 UTC m=+0.128603202 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 06 16:19:33 compute-0 podman[254556]: 2026-01-06 16:19:33.912821694 +0000 UTC m=+0.165408543 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 06 16:19:39 compute-0 nova_compute[185513]: 2026-01-06 16:19:39.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:19:40 compute-0 podman[254594]: 2026-01-06 16:19:40.832370903 +0000 UTC m=+0.087855437 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 06 16:19:40 compute-0 podman[254593]: 2026-01-06 16:19:40.862206653 +0000 UTC m=+0.113279891 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true)
Jan 06 16:19:44 compute-0 podman[254636]: 2026-01-06 16:19:44.841774716 +0000 UTC m=+0.134426133 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=base rhel9)
Jan 06 16:19:48 compute-0 podman[254658]: 2026-01-06 16:19:48.871569603 +0000 UTC m=+0.118295663 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 16:19:48 compute-0 podman[254657]: 2026-01-06 16:19:48.962433497 +0000 UTC m=+0.213404698 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 06 16:19:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:19:53.746 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:19:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:19:53.748 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:19:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:19:53.748 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
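The Acquiring/acquired/released trio above is emitted by oslo.concurrency's lockutils around the metadata agent's _check_child_processes section. A minimal sketch of how such a guarded section is written with the real oslo_concurrency.lockutils decorator; the function body here is a placeholder, not neutron's actual code, and it assumes oslo.concurrency is installed.

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs with the named lock held; lockutils logs the
    # "Acquiring" / "acquired" / "released" DEBUG lines shown above.
    pass

check_child_processes()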
Jan 06 16:19:59 compute-0 podman[201918]: time="2026-01-06T16:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:19:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:19:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3906 "" "Go-http-client/1.1"
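These GET requests are the libpod REST API being scraped over /run/podman/podman.sock (the prometheus-podman-exporter container mounts that socket). A minimal sketch of issuing the same containers/json query from Python over the UNIX socket; the socket path and API version are taken from the log line above, and local access to the socket (typically root) is assumed.

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks to a local UNIX socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")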
Jan 06 16:20:01 compute-0 nova_compute[185513]: 2026-01-06 16:20:01.183 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:20:01 compute-0 openstack_network_exporter[205258]: ERROR   16:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:20:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:20:01 compute-0 openstack_network_exporter[205258]: ERROR   16:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:20:01 compute-0 openstack_network_exporter[205258]: 
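The two ERROR lines above come from ovs-appctl calls (dpif-netdev/pmd-perf-show, dpif-netdev/pmd-rxq-show) that only apply to a userspace (netdev/DPDK) datapath; on a host using the kernel datapath they fail with "please specify an existing datapath". A hedged sketch of checking which datapaths exist before invoking the PMD commands; it assumes ovs-appctl is present and ovs-vswitchd is running on the host.

import shutil
import subprocess

if shutil.which("ovs-appctl"):
    # dpctl/dump-dps lists datapaths such as "system@ovs-system" (kernel)
    # or "netdev@ovs-netdev" (userspace/DPDK).
    dps = subprocess.run(["ovs-appctl", "dpctl/dump-dps"],
                         capture_output=True, text=True).stdout.split()
    if any(dp.startswith("netdev@") for dp in dps):
        print(subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                             capture_output=True, text=True).stdout)
    else:
        print("no userspace (netdev) datapath; dpif-netdev/pmd-* commands "
              "will answer: please specify an existing datapath")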
Jan 06 16:20:02 compute-0 podman[254707]: 2026-01-06 16:20:02.883691544 +0000 UTC m=+0.140091562 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 06 16:20:04 compute-0 podman[254726]: 2026-01-06 16:20:04.881404697 +0000 UTC m=+0.132613347 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible)
Jan 06 16:20:04 compute-0 podman[254727]: 2026-01-06 16:20:04.881484039 +0000 UTC m=+0.128704334 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Jan 06 16:20:09 compute-0 nova_compute[185513]: 2026-01-06 16:20:09.019 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:20:10 compute-0 nova_compute[185513]: 2026-01-06 16:20:10.649 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:20:11 compute-0 nova_compute[185513]: 2026-01-06 16:20:11.048 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:20:11 compute-0 podman[254765]: 2026-01-06 16:20:11.902726586 +0000 UTC m=+0.161745928 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 06 16:20:11 compute-0 podman[254764]: 2026-01-06 16:20:11.91781583 +0000 UTC m=+0.173943676 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 06 16:20:14 compute-0 nova_compute[185513]: 2026-01-06 16:20:14.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:20:14 compute-0 nova_compute[185513]: 2026-01-06 16:20:14.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:20:14 compute-0 nova_compute[185513]: 2026-01-06 16:20:14.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 06 16:20:15 compute-0 podman[254808]: 2026-01-06 16:20:15.85669967 +0000 UTC m=+0.110670923 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, version=9.4, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 06 16:20:16 compute-0 nova_compute[185513]: 2026-01-06 16:20:16.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:20:16 compute-0 nova_compute[185513]: 2026-01-06 16:20:16.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 06 16:20:16 compute-0 nova_compute[185513]: 2026-01-06 16:20:16.025 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 06 16:20:16 compute-0 nova_compute[185513]: 2026-01-06 16:20:16.530 185517 DEBUG nova.compute.manager [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 06 16:20:17 compute-0 nova_compute[185513]: 2026-01-06 16:20:17.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:20:19 compute-0 podman[254828]: 2026-01-06 16:20:19.892709848 +0000 UTC m=+0.143943783 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 16:20:19 compute-0 podman[254827]: 2026-01-06 16:20:19.945412815 +0000 UTC m=+0.205195553 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.023 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.062 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.063 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.063 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.064 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.537 185517 WARNING nova.virt.libvirt.driver [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.539 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5644MB free_disk=72.44643020629883GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.539 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.540 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.623 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.623 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.654 185517 DEBUG nova.compute.provider_tree [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed in ProviderTree for provider: 6e7a5a7f-91c3-4b82-b43d-f32569e61608 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.670 185517 DEBUG nova.scheduler.client.report [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Inventory has not changed for provider 6e7a5a7f-91c3-4b82-b43d-f32569e61608 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.672 185517 DEBUG nova.compute.resource_tracker [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 06 16:20:21 compute-0 nova_compute[185513]: 2026-01-06 16:20:21.673 185517 DEBUG oslo_concurrency.lockutils [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:20:29 compute-0 podman[201918]: time="2026-01-06T16:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:20:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:20:29 compute-0 podman[201918]: @ - - [06/Jan/2026:16:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3912 "" "Go-http-client/1.1"
Jan 06 16:20:30 compute-0 nova_compute[185513]: 2026-01-06 16:20:30.673 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:20:31 compute-0 openstack_network_exporter[205258]: ERROR   16:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:20:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:20:31 compute-0 openstack_network_exporter[205258]: ERROR   16:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:20:31 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:20:33 compute-0 podman[254877]: 2026-01-06 16:20:33.812981519 +0000 UTC m=+0.083157288 container health_status 7a32300437d9ddb605aad296f0504a90c36e407cafaeb98f89abdaa90b719487 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 06 16:20:35 compute-0 podman[254896]: 2026-01-06 16:20:35.831947935 +0000 UTC m=+0.099427137 container health_status 3eb8cf75feac748f313f0e1934cfdca8fd23729202865ed88f8faeb8b724af32 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224)
Jan 06 16:20:35 compute-0 podman[254897]: 2026-01-06 16:20:35.836312227 +0000 UTC m=+0.098659517 container health_status 6d632224d377eac27b62b48030a965cbd36ef3b1a49b945a2201c25bfcf307b4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.openshift.expose-services=)
Jan 06 16:20:42 compute-0 podman[254937]: 2026-01-06 16:20:42.831841841 +0000 UTC m=+0.086904134 container health_status 935cf5831adb43b7d3fa0ecda7ca44a4dfd964cf870d54eda168982e1659410f (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 06 16:20:42 compute-0 podman[254936]: 2026-01-06 16:20:42.833811912 +0000 UTC m=+0.098721488 container health_status 2363b1591262eb97baef0cd2330b20e1d6a8e826cb0b2cfa4317bb3cf011fc04 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 06 16:20:45 compute-0 sshd-session[254978]: Accepted publickey for zuul from 192.168.122.10 port 38252 ssh2: ECDSA SHA256:DDwQ+JO+n+v5pJShQBlEX/UDz8vjWb0I2WAiwG3dUzE
Jan 06 16:20:45 compute-0 systemd-logind[791]: New session 32 of user zuul.
Jan 06 16:20:45 compute-0 systemd[1]: Started Session 32 of User zuul.
Jan 06 16:20:45 compute-0 sshd-session[254978]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 06 16:20:45 compute-0 sudo[254982]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 06 16:20:45 compute-0 sudo[254982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 06 16:20:46 compute-0 podman[255016]: 2026-01-06 16:20:46.1837344 +0000 UTC m=+0.103277095 container health_status f36727e67c3e891afbef1fe9312873f50d6969fb3b401496131b8eb8650e9eca (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, config_id=kepler, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, release=1214.1726694543, container_name=kepler)
Jan 06 16:20:50 compute-0 podman[255148]: 2026-01-06 16:20:50.311411407 +0000 UTC m=+0.107496194 container health_status 97cc0a4e5e44cc2ab0de65b9056eda7643b58bca3835278e650c2b6609cf662e (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 06 16:20:50 compute-0 podman[255147]: 2026-01-06 16:20:50.383707415 +0000 UTC m=+0.171906039 container health_status 79e8cb689bbf9fe60e076fc82d39a196bc51a1fd7fd3b5f4bcdfbd8a83f136f2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '2c7af564c8c005ef81bc0a341502a87a9d0f5a2b2314544edf9f5d3fb590fe02-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c-addfe1494b3bf5f4f999890fec6dc7fcb7c9162722575b8dee83f93b28cfd38c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 06 16:20:51 compute-0 ovs-vsctl[255220]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 06 16:20:52 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 255006 (sos)
Jan 06 16:20:52 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 06 16:20:52 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 06 16:20:52 compute-0 virtqemud[185235]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 06 16:20:52 compute-0 virtqemud[185235]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 06 16:20:52 compute-0 virtqemud[185235]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 06 16:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:20:53.748 107298 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 06 16:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:20:53.750 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 06 16:20:53 compute-0 ovn_metadata_agent[107293]: 2026-01-06 16:20:53.750 107298 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 06 16:20:54 compute-0 crontab[255635]: (root) LIST (root)
Jan 06 16:20:57 compute-0 systemd[1]: Starting Hostname Service...
Jan 06 16:20:57 compute-0 systemd[1]: Started Hostname Service.
Jan 06 16:20:59 compute-0 podman[201918]: time="2026-01-06T16:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 06 16:20:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 06 16:20:59 compute-0 podman[201918]: @ - - [06/Jan/2026:16:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3898 "" "Go-http-client/1.1"
Jan 06 16:21:01 compute-0 nova_compute[185513]: 2026-01-06 16:21:01.024 185517 DEBUG oslo_service.periodic_task [None req-85c6a8b7-ecd2-4f61-ac50-0d1129d047cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 06 16:21:01 compute-0 openstack_network_exporter[205258]: ERROR   16:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 06 16:21:01 compute-0 openstack_network_exporter[205258]: 
Jan 06 16:21:01 compute-0 openstack_network_exporter[205258]: ERROR   16:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 06 16:21:01 compute-0 openstack_network_exporter[205258]: 